issues
5 rows where user = 43126798 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
412180435 | MDU6SXNzdWU0MTIxODA0MzU= | 2780 | Automatic dtype encoding in to_netcdf | nedclimaterisk 43126798 | open | 0 | 4 | 2019-02-19T23:56:48Z | 2022-04-28T19:01:34Z | CONTRIBUTOR | Code Sample, a copy-pastable example if possible. Example from https://stackoverflow.com/questions/49053692/csv-to-netcdf-produces-nc-files-4x-larger-than-the-original-csv

```{python}
import pandas as pd
import xarray as xr
import numpy as np
import os

# Create pandas DataFrame
df = pd.DataFrame(np.random.randint(low=0, high=10, size=(100000, 5)),
                  columns=['a', 'b', 'c', 'd', 'e'])

# Make 'e' a column of strings
df['e'] = df['e'].astype(str)

# Save to csv
df.to_csv('df.csv')

# Convert to an xarray Dataset
ds = xr.Dataset.from_dataframe(df)

# Save NetCDF file
ds.to_netcdf('ds.nc')

# Compute stats
stats1 = os.stat('df.csv')
stats2 = os.stat('ds.nc')
print('csv=', str(stats1.st_size))
print('nc =', str(stats2.st_size))
print('nc/csv=', str(stats2.st_size/stats1.st_size))
```

The result:
Problem description: NetCDF can store numerical data, as well as some other data, such as categorical data, in a much more efficient way than CSV, due to its ability to store numbers (integers, limited-precision floats) in smaller encodings (e.g. 8-bit integers), as well as its ability to compress data using zlib. The answers in the Stack Overflow link at the top of the page give some examples of how this can be done. The second one is particularly useful, and it would be nice if xarray provided an automatic way to do this.

Expected Output: NetCDF output should be equal to or smaller than a CSV full of numerical data in most cases.

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
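The manual workaround from the second Stack Overflow answer (pick the smallest integer dtype that fits the data, and enable zlib compression through `to_netcdf`'s `encoding` argument) can be sketched without xarray itself. `minimal_int_dtype` and `build_encoding` below are hypothetical helper names, not xarray API:

```python
# Sketch of building a per-variable encoding dict for ds.to_netcdf().
# minimal_int_dtype / build_encoding are invented helper names.

_INT_RANGES = [
    ('int8', -2**7, 2**7 - 1),
    ('int16', -2**15, 2**15 - 1),
    ('int32', -2**31, 2**31 - 1),
    ('int64', -2**63, 2**63 - 1),
]

def minimal_int_dtype(values):
    """Return the name of the smallest integer dtype holding every value."""
    vmin, vmax = min(values), max(values)
    for name, lo, hi in _INT_RANGES:
        if lo <= vmin and vmax <= hi:
            return name
    raise ValueError('values do not fit in int64')

def build_encoding(int_vars, complevel=4):
    """Map variable name -> encoding dict accepted by to_netcdf(encoding=...)."""
    return {
        name: {'dtype': minimal_int_dtype(vals), 'zlib': True, 'complevel': complevel}
        for name, vals in int_vars.items()
    }

# Usage against the example above would be roughly:
#   enc = build_encoding({name: ds[name].values for name in ['a', 'b', 'c', 'd']})
#   ds.to_netcdf('ds.nc', encoding=enc)
```

The `dtype`, `zlib`, and `complevel` keys are real netCDF4-backend encoding options; only the helpers around them are invented for illustration.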
366626025 | MDU6SXNzdWUzNjY2MjYwMjU= | 2461 | Set one-dimensional data variable as dimension coordinate? | nedclimaterisk 43126798 | closed | 0 | 13 | 2018-10-04T05:19:25Z | 2019-11-21T23:57:15Z | 2018-10-04T06:22:03Z | CONTRIBUTOR | Code Sample: I have this dataset, and I'd like to make it indexable by time:
Problem description: I expected to be able to use `set_coords` to do this.

Expected Output:

```python
ds.set_coords('time', inplace=True)

<xarray.Dataset>
Dimensions:                (station_observations: 46862)
Coordinates:
    time                   (station_observations) datetime64[ns] ...
Dimensions without coordinates: station_observations
Data variables:
    SNOW_ON_THE_GROUND     (station_observations) float64 ...
    ONE_DAY_SNOW           (station_observations) float64 ...
    ONE_DAY_RAIN           (station_observations) float64 ...
    ONE_DAY_PRECIPITATION  (station_observations) float64 ...
    MIN_TEMP               (station_observations) float64 ...
    MAX_TEMP               (station_observations) float64 ...
Attributes:
    elevation: 15.0

In [95]: ds.sel(time='1896')
ValueError: dimensions or multi-index levels ['time'] do not exist
```

with assign_coords:

```python
In [97]: ds = ds.assign_coords(station_observations=ds.time)

In [98]: ds.sel(station_observations='1896')
Out[98]:
<xarray.Dataset>
Dimensions:                (station_observations: 366)
Coordinates:
  * station_observations   (station_observations) datetime64[ns] 1896-01-01 ...
Data variables:
    time                   (station_observations) datetime64[ns] ...
    SNOW_ON_THE_GROUND     (station_observations) float64 ...
    ONE_DAY_SNOW           (station_observations) float64 ...
    ONE_DAY_RAIN           (station_observations) float64 ...
    ONE_DAY_PRECIPITATION  (station_observations) float64 ...
    MIN_TEMP               (station_observations) float64 ...
    MAX_TEMP               (station_observations) float64 ...
Attributes:
    elevation: 15.0
```

works correctly, but looks ugly. It would be nice if the time variable could be assigned as a dimension directly. I can drop the time variable and rename station_observations, but it's a little annoying to do so. Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
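The behaviour issue 2461 asks for can be achieved with `set_coords` followed by `swap_dims`. A minimal sketch with made-up sample data along a `station_observations` dimension (the variable names mimic the issue; the values are invented):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Made-up stand-in for the dataset in the issue: 'time' starts life as
# a plain data variable along the 'station_observations' dimension.
times = pd.date_range('1896-01-01', periods=5)
ds = xr.Dataset({
    'time': ('station_observations', times),
    'MAX_TEMP': ('station_observations', np.array([3.1, 4.2, 2.8, 5.0, 1.9])),
})

# Promote 'time' to a coordinate, then swap it in as the dimension
# coordinate, replacing 'station_observations'.
ds = ds.set_coords('time').swap_dims({'station_observations': 'time'})

# Label-based selection by time now works.
subset = ds.sel(time='1896')
```

`set_coords` alone only marks the variable as a coordinate; `swap_dims` is what makes it the indexable dimension coordinate.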
456769766 | MDU6SXNzdWU0NTY3Njk3NjY= | 3026 | Rename dims independently from coords? | nedclimaterisk 43126798 | closed | 0 | 11 | 2019-06-17T06:48:40Z | 2019-07-02T20:24:55Z | 2019-07-02T20:24:55Z | CONTRIBUTOR | I have a dataset that looks like this:
Problem description: I would like to be able to rename the dataset dimensions without renaming the coordinates.

Expected Output:
As far as I can tell, there is no way to do this. I can
But it doesn't seem to be possible to re-assign the new coordinates as the indexes for the existing dims. In this case, it may seem a bit redundant, because the coordinates are equal to the grid. But I'm trying to get this output to work with code that also deals with other datasets that have non-rectilinear grids. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
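What issue 3026 asks for is covered by xarray's `Dataset.rename_dims` (with `rename_vars` as its counterpart), which renames dimensions without touching coordinate variables. A minimal sketch on a made-up 2-D rectilinear dataset:

```python
import numpy as np
import xarray as xr

# Made-up rectilinear dataset whose dims share names with its coords.
ds = xr.Dataset(
    {'temperature': (('x', 'y'), np.zeros((2, 3)))},
    coords={'x': [0.0, 1.0], 'y': [0.0, 0.5, 1.0]},
)

# Rename only the dimensions; the 'x' and 'y' coordinate variables keep
# their names and become non-dimension coordinates along the new dims.
renamed = ds.rename_dims({'x': 'x_dim', 'y': 'y_dim'})
```

This keeps the coordinate arrays intact for later use with non-rectilinear grids, where a coordinate is genuinely distinct from its dimension.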
366711310 | MDExOlB1bGxSZXF1ZXN0MjIwMzE2NjY2 | 2463 | Add swap_coords to relevant 'See also' sections | nedclimaterisk 43126798 | closed | 0 | 3 | 2018-10-04T09:58:16Z | 2018-10-23T05:55:45Z | 2018-10-23T05:55:34Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2463 | Minor documentation additions.
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
371845469 | MDExOlB1bGxSZXF1ZXN0MjI0MTk5OTc1 | 2493 | Include multidimensional stacking groupby in docs | nedclimaterisk 43126798 | closed | 0 | 1 | 2018-10-19T07:55:15Z | 2018-10-23T00:25:45Z | 2018-10-23T00:25:35Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/2493 | Include short example of multidimensional stack-groupby-apply-unstack methodology. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
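The listing at the top of this page ("rows where user = 43126798 sorted by updated_at descending") corresponds to a simple query over this schema. A self-contained sketch using Python's built-in sqlite3, with an abbreviated copy of the schema and two sample rows taken from the table above:

```python
import sqlite3

# In-memory database with an abbreviated version of the [issues] schema.
conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE issues ("
    " [id] INTEGER PRIMARY KEY, [number] INTEGER, [title] TEXT,"
    " [user] INTEGER, [state] TEXT, [updated_at] TEXT)"
)

# Two sample rows copied from the listing above.
rows = [
    (412180435, 2780, 'Automatic dtype encoding in to_netcdf',
     43126798, 'open', '2022-04-28T19:01:34Z'),
    (366626025, 2461, 'Set one-dimensional data variable as dimension coordinate?',
     43126798, 'closed', '2019-11-21T23:57:15Z'),
]
conn.executemany('INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)', rows)

# "rows where user = 43126798 sorted by updated_at descending"
result = conn.execute(
    'SELECT [number], [title] FROM issues'
    ' WHERE [user] = ? ORDER BY [updated_at] DESC',
    (43126798,),
).fetchall()
```

ISO 8601 timestamps sort correctly as plain text, which is why ordering by the TEXT column `updated_at` works here.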