issues


5 rows where user = 43126798 sorted by updated_at descending




Facets: type (issue 3, pull 2) · state (closed 4, open 1) · repo (xarray 5)
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
412180435 MDU6SXNzdWU0MTIxODA0MzU= 2780 Automatic dtype encoding in to_netcdf nedclimaterisk 43126798 open 0     4 2019-02-19T23:56:48Z 2022-04-28T19:01:34Z   CONTRIBUTOR      

Code Sample, a copy-pastable example if possible

Example from https://stackoverflow.com/questions/49053692/csv-to-netcdf-produces-nc-files-4x-larger-than-the-original-csv

```python
import pandas as pd
import xarray as xr
import numpy as np
import os

# Create pandas DataFrame
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(100000, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'])

# Make 'e' a column of strings
df['e'] = df['e'].astype(str)

# Save to csv
df.to_csv('df.csv')

# Convert to an xarray Dataset
ds = xr.Dataset.from_dataframe(df)

# Save NetCDF file
ds.to_netcdf('ds.nc')

# Compute stats
stats1 = os.stat('df.csv')
stats2 = os.stat('ds.nc')
print('csv=', str(stats1.st_size))
print('nc =', str(stats2.st_size))
print('nc/csv=', str(stats2.st_size / stats1.st_size))
```

The result:

```
csv = 1688902 bytes
nc = 6432441 bytes
nc/csv = 3.8086526038811015
```

Problem description

NetCDF can store numerical data, as well as some other data such as categorical data, much more efficiently than CSV, due to its ability to store numbers (integers, limited-precision floats) in smaller encodings (e.g. 8-bit integers), as well as its ability to compress data using zlib.
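For reference, this can already be done by hand through `to_netcdf`'s existing `encoding` argument. A minimal sketch, reusing the `ds` from the example above (the output filename and compression level are arbitrary choices):

```python
# The integer columns a-d only hold values 0-9, so 8-bit integers suffice;
# zlib compression shrinks the file further.
encoding = {name: {'dtype': 'int8', 'zlib': True, 'complevel': 4}
            for name in ['a', 'b', 'c', 'd']}
ds.to_netcdf('ds_small.nc', encoding=encoding)
```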

The answers in the Stack Overflow link at the top of this issue give some examples of how this can be done. The second one is particularly useful, and it would be nice if xarray provided an encoding={'dtype': 'auto'} option to automatically select the most compact dtype able to store a given DataArray before saving the file.
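Such an option does not exist today; the sketch below only illustrates what it might do, using a hypothetical `smallest_int_dtype` helper and `to_netcdf_auto` wrapper (both names are made up for this example):

```python
import numpy as np


def smallest_int_dtype(var):
    """Hypothetical helper: narrowest integer dtype that can hold var's values."""
    if not np.issubdtype(var.dtype, np.integer):
        return var.dtype  # leave floats and strings untouched in this sketch
    lo, hi = int(var.min()), int(var.max())
    for dtype in (np.int8, np.int16, np.int32, np.int64):
        info = np.iinfo(dtype)
        if info.min <= lo <= hi <= info.max:
            return dtype
    return var.dtype


def to_netcdf_auto(ds, path, **kwargs):
    """Hypothetical wrapper approximating encoding={'dtype': 'auto'}."""
    encoding = {
        name: {'dtype': smallest_int_dtype(var), 'zlib': True}
        for name, var in ds.data_vars.items()
        if np.issubdtype(var.dtype, np.number)
    }
    return ds.to_netcdf(path, encoding=encoding, **kwargs)
```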

Expected Output

NetCDF output should be equal to or smaller in size than a CSV full of numerical data in most cases.

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.7 | packaged by conda-forge | (default, Nov 21 2018, 03:09:43) [GCC 7.3.0]
python-bits: 64
OS: Linux
OS-release: 4.18.0-15-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: en_AU.UTF-8
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.11.3
pandas: 0.24.1
netCDF4: 1.4.2
h5netcdf: 0.6.2
h5py: 2.9.0
dask: 1.0.0
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2780/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
366626025 MDU6SXNzdWUzNjY2MjYwMjU= 2461 Set one-dimensional data variable as dimension coordinate? nedclimaterisk 43126798 closed 0     13 2018-10-04T05:19:25Z 2019-11-21T23:57:15Z 2018-10-04T06:22:03Z CONTRIBUTOR      

Code Sample

I have this dataset, and I'd like to make it indexable by time:

```python
<xarray.Dataset>
Dimensions:                (station_observations: 46862)
Dimensions without coordinates: station_observations
Data variables:
    time                   (station_observations) datetime64[ns] ...
    SNOW_ON_THE_GROUND     (station_observations) float64 ...
    ONE_DAY_SNOW           (station_observations) float64 ...
    ONE_DAY_RAIN           (station_observations) float64 ...
    ONE_DAY_PRECIPITATION  (station_observations) float64 ...
    MIN_TEMP               (station_observations) float64 ...
    MAX_TEMP               (station_observations) float64 ...
Attributes:
    elevation: 15.0
```

Problem description

I expected to be able to use ds.set_coords to make the time variable an indexable coordinate. The variable is indeed converted to a coordinate, but it is not a dimension coordinate, so I can't index with it. I can use assign_coords(station_observations=ds.time) to make station_observations indexable by time, but then the name is semantically wrong, and the time variable still exists, which makes the code harder to maintain.

Expected Output

```python
ds.set_coords('time', inplace=True)
<xarray.Dataset>
Dimensions:                (station_observations: 46862)
Coordinates:
    time                   (station_observations) datetime64[ns] ...
Dimensions without coordinates: station_observations
Data variables:
    SNOW_ON_THE_GROUND     (station_observations) float64 ...
    ONE_DAY_SNOW           (station_observations) float64 ...
    ONE_DAY_RAIN           (station_observations) float64 ...
    ONE_DAY_PRECIPITATION  (station_observations) float64 ...
    MIN_TEMP               (station_observations) float64 ...
    MAX_TEMP               (station_observations) float64 ...
Attributes:
    elevation: 15.0

In [95]: ds.sel(time='1896')
ValueError: dimensions or multi-index levels ['time'] do not exist
```

with assign_coords:

```python
In [97]: ds = ds.assign_coords(station_observations=ds.time)

In [98]: ds.sel(station_observations='1896')
Out[98]:
<xarray.Dataset>
Dimensions:                (station_observations: 366)
Coordinates:
  * station_observations  (station_observations) datetime64[ns] 1896-01-01 ...
Data variables:
    time                   (station_observations) datetime64[ns] ...
    SNOW_ON_THE_GROUND     (station_observations) float64 ...
    ONE_DAY_SNOW           (station_observations) float64 ...
    ONE_DAY_RAIN           (station_observations) float64 ...
    ONE_DAY_PRECIPITATION  (station_observations) float64 ...
    MIN_TEMP               (station_observations) float64 ...
    MAX_TEMP               (station_observations) float64 ...
Attributes:
    elevation: 15.0
```

This works correctly, but looks ugly. It would be nice if the time variable could be assigned as a dimension directly. I can drop the time variable and rename station_observations, but it's a little annoying to do so.
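A possibly cleaner route, assuming `ds` is the dataset shown above, is `swap_dims`, which makes an existing variable the dimension coordinate and renames the dimension in one step (a sketch, not necessarily the exact answer given on this issue):

```python
# Promote 'time' to a coordinate, then use it as the dimension coordinate
# in place of 'station_observations'.
ds2 = ds.set_coords('time').swap_dims({'station_observations': 'time'})
ds2.sel(time='1896')  # selection by time now works
```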

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.6.final.0
python-bits: 64
OS: Linux
OS-release: 4.16.0-041600-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_AU.UTF-8
LOCALE: en_AU.UTF-8
xarray: 0.10.2
pandas: 0.22.0
numpy: 1.13.3
scipy: 0.19.1
netCDF4: 1.3.1
h5netcdf: None
h5py: None
Nio: None
zarr: None
bottleneck: 1.2.0
cyordereddict: None
dask: 0.16.0
distributed: None
matplotlib: 2.1.1
cartopy: None
seaborn: None
setuptools: 39.0.1
pip: 9.0.1
conda: None
pytest: None
IPython: 5.5.0
sphinx: None
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2461/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
456769766 MDU6SXNzdWU0NTY3Njk3NjY= 3026 Rename dims independently from coords? nedclimaterisk 43126798 closed 0     11 2019-06-17T06:48:40Z 2019-07-02T20:24:55Z 2019-07-02T20:24:55Z CONTRIBUTOR      

I have a dataset that looks like this:

```python
<xarray.Dataset>
Dimensions:  (lat: 226, lon: 261, time: 7300)
Coordinates:
  * lat      (lat) float32 -32.0 -31.9 -31.8 -31.7 ... -9.700001 -9.6 -9.5
  * lon      (lon) float32 132.0 132.1 132.2 132.3 ... 157.7 157.8 157.9 158.0
  * time     (time) object 1980-01-01 15:00:00 ... 1999-12-31 15:00:00
Data variables:
    rnd24    (time, lat, lon) float32 ...
```

Problem description

I would like to be able to rename the dataset dimensions, without renaming the coordinates.

Expected Output

```python
<xarray.Dataset>
Dimensions:  (y: 226, x: 261, time: 7300)
Coordinates:
  * lat      (y) float32 -32.0 -31.9 -31.8 -31.7 ... -9.700001 -9.6 -9.5
  * lon      (x) float32 132.0 132.1 132.2 132.3 ... 157.7 157.8 157.9 158.0
  * time     (time) object 1980-01-01 15:00:00 ... 1999-12-31 15:00:00
Data variables:
    rnd24    (time, y, x) float32 ...
```

As far as I can tell, there is no way to do this. I can rename the existing dims/coords to x/y, and then manually create new coordinates that are copies of x and y, which gets me to:

```python
<xarray.Dataset>
Dimensions:  (time: 7300, x: 261, y: 226)
Coordinates:
  * y        (y) float32 -32.0 -31.9 -31.8 -31.7 ... -9.700001 -9.6 -9.5
  * x        (x) float32 132.0 132.1 132.2 132.3 ... 157.7 157.8 157.9 158.0
  * time     (time) object 1980-01-01 15:00:00 ... 1999-12-31 15:00:00
    lat      (y) float32 -32.0 -31.9 -31.8 -31.7 ... -9.700001 -9.6 -9.5
    lon      (x) float32 132.0 132.1 132.2 132.3 ... 157.7 157.8 157.9 158.0
Data variables:
    rnd24    (time, y, x) float32 ...
```

But it doesn't seem to be possible to re-assign the new coordinates as the indexes for the existing dims.

In this case it may seem a bit redundant, because the coordinates are equal to the grid, but I'm trying to get this output to work with code that also deals with other datasets that have non-rectilinear grids.
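For what it's worth, more recent xarray releases provide `Dataset.rename_dims` (and a matching `rename_vars`), which rename only dimensions and leave coordinate variables alone. A minimal sketch, assuming `ds` is the dataset shown at the top of this issue:

```python
# Rename only the dimensions; 'lat' and 'lon' remain as (now non-indexing)
# coordinate variables along the new 'y' and 'x' dimensions.
ds2 = ds.rename_dims({'lat': 'y', 'lon': 'x'})
```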

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3026/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
366711310 MDExOlB1bGxSZXF1ZXN0MjIwMzE2NjY2 2463 Add swap_coords to relevant 'See also' sections nedclimaterisk 43126798 closed 0     3 2018-10-04T09:58:16Z 2018-10-23T05:55:45Z 2018-10-23T05:55:34Z CONTRIBUTOR   0 pydata/xarray/pulls/2463

Minor documentation additions.

  • [ ] Closes #2461
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2463/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
371845469 MDExOlB1bGxSZXF1ZXN0MjI0MTk5OTc1 2493 Include multidimensional stacking groupby in docs nedclimaterisk 43126798 closed 0     1 2018-10-19T07:55:15Z 2018-10-23T00:25:45Z 2018-10-23T00:25:35Z CONTRIBUTOR   0 pydata/xarray/pulls/2493

Include short example of multidimensional stack-groupby-apply-unstack methodology.
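The pattern being documented is, roughly: stack several dimensions into one, group over the stacked dimension, apply a function per group, and unstack back. A minimal sketch with assumed dimension names (not the exact example from the PR):

```python
import numpy as np
import xarray as xr

# Hypothetical data: a time series at every point of a small lat/lon grid.
da = xr.DataArray(
    np.random.rand(8, 3, 4),
    dims=('time', 'lat', 'lon'),
    coords={'time': np.arange(8),
            'lat': [10.0, 20.0, 30.0],
            'lon': [100.0, 110.0, 120.0, 130.0]},
)

# Stack the two spatial dimensions into a single 'gridcell' dimension,
# de-mean each cell's time series via groupby, then unstack back to (lat, lon).
stacked = da.stack(gridcell=('lat', 'lon'))
anomalies = (
    stacked.groupby('gridcell')
    .map(lambda cell: cell - cell.mean('time'))
    .unstack('gridcell')
)
```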

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2493/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
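As a small, hypothetical illustration (not part of the page), the filtered view above ("5 rows where user = 43126798 sorted by updated_at descending") corresponds to a query like the following against a local copy of this database; the filename github.db is an assumption:

```python
import sqlite3

# Reproduce the filtered, sorted view over the issues table defined above.
conn = sqlite3.connect('github.db')
rows = conn.execute(
    """
    SELECT number, title, type, state, created_at, updated_at
    FROM issues
    WHERE [user] = 43126798
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```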