issues
7 rows where state = "closed" and user = 206773, sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type. The assignee, milestone, active_lock_reason, draft, pull_request, and performed_via_github_app columns are empty for all seven rows; the populated values are listed per issue below.
1576 · Variable of dtype int8 casted to float64
id: 258500654 · node_id: MDU6SXNzdWUyNTg1MDA2NTQ= · user: forman (206773) · state: closed · locked: 0 · comments: 11 · author_association: NONE
created_at: 2017-09-18T14:28:32Z · updated_at: 2020-11-09T07:06:31Z · closed_at: 2020-11-09T07:06:30Z
body (excerpt): I'm using a CF-compliant dataset from the ESA Land Cover CCI Project that contains a variable …. If I switch off CF decoding I get the original data type. I'd actually expect it to be converted to …. The dataset is available here: ftp://anon-ftp.ceda.ac.uk/neodc/esacci/land_cover/data/land_cover_maps/v1.6.1/ESACCI-LC-L4-LCCS-Map-300m-P5Y-2010-v1.6.1.nc. Note the file is ~3 GB. Btw, the attributes of the variable are ….
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/1576/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
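The dtype comparison described in this issue can be reproduced by opening the same file with and without CF decoding. This is a minimal sketch that assumes the ESACCI file linked above has been downloaded locally; with mask-and-scale decoding enabled (the default), variables carrying a _FillValue are typically promoted to a float dtype, which matches the behaviour reported here.

```python
# Minimal sketch: compare on-disk dtypes with CF-decoded dtypes.
# Assumes the ESACCI-LC file linked in the issue is available locally.
import xarray as xr

path = "ESACCI-LC-L4-LCCS-Map-300m-P5Y-2010-v1.6.1.nc"

decoded = xr.open_dataset(path)                  # CF decoding on (default)
raw = xr.open_dataset(path, decode_cf=False)     # CF decoding off

for name in decoded.data_vars:
    print(f"{name}: raw {raw[name].dtype} -> decoded {decoded[name].dtype}")
```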
819 · N-D rolling
id: 146287030 · node_id: MDU6SXNzdWUxNDYyODcwMzA= · user: forman (206773) · state: closed · locked: 0 · comments: 5 · author_association: NONE
created_at: 2016-04-06T11:42:42Z · updated_at: 2019-02-27T17:48:20Z · closed_at: 2019-02-27T17:48:20Z
body (excerpt): Dear xarray Team, We just discovered xarray and it seems to be a fantastic candidate to serve as a core library for our climate data toolbox we are about to implement. While investigating the API we recognized that the … is limited to a single …. Actually, I also asked myself why the …. Anyway, thanks for xarray! Regards, Norman
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/819/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
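For context, a rolling-window sketch on a synthetic array: at the time of this report .rolling() was limited to a single dimension, while recent xarray releases also accept several dimension arguments. The array and dimension names below are illustrative.

```python
# Sketch of 1-D vs N-D rolling means on a synthetic array; the two-dimension call
# assumes a recent xarray release in which .rolling() accepts multiple dims.
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(20, 30), dims=("lat", "lon"))

mean_1d = da.rolling(lon=3, center=True).mean()           # single-dimension window
mean_2d = da.rolling(lat=3, lon=3, center=True).mean()    # N-D window (newer xarray)
print(mean_1d.shape, mean_2d.shape)
```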
899 · Let open_mfdataset() respect cell boundary variables
id: 165540933 · node_id: MDU6SXNzdWUxNjU1NDA5MzM= · user: forman (206773) · state: closed · locked: 0 · comments: 5 · author_association: NONE
created_at: 2016-07-14T11:36:49Z · updated_at: 2019-02-25T19:28:23Z · closed_at: 2019-02-25T19:28:23Z
body (excerpt): I recently faced a problem with …. We could solve the problem by using the preprocess argument and turning these data variables into coordinate variables with ds.set_coords('lat_bnds', inplace=True). However, it would be nice to prevent concatenation of variables that don't have the concat_dim, e.g. by a keyword argument selective_concat or respect_cell_bnds_vars or so.
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/899/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
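The workaround mentioned in the body can be written as a preprocess callback. This sketch uses set_coords without the now-removed inplace argument; the file pattern and the lon_bnds/time_bnds names are illustrative (only lat_bnds appears in the issue text).

```python
# Sketch of the preprocess workaround: promote bounds variables to coordinates so
# open_mfdataset does not try to concatenate them along the record dimension.
# The glob pattern and the lon_bnds/time_bnds names are assumptions.
import xarray as xr

def promote_bounds(ds):
    bounds = [name for name in ("lat_bnds", "lon_bnds", "time_bnds") if name in ds]
    return ds.set_coords(bounds)

ds = xr.open_mfdataset("*.nc", preprocess=promote_bounds, combine="by_coords")
```

In current releases, passing data_vars="minimal" (and coords="minimal") to open_mfdataset has a similar effect, since variables that do not contain the concatenation dimension are then left unconcatenated.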
822 · value scaling wrong in special cases
id: 146975644 · node_id: MDU6SXNzdWUxNDY5NzU2NDQ= · user: forman (206773) · state: closed · locked: 0 · comments: 13 · author_association: NONE
created_at: 2016-04-08T16:29:33Z · updated_at: 2019-02-19T02:11:31Z · closed_at: 2019-02-19T02:11:31Z
body (excerpt): For the same netCDF file used in #821, the value scaling seems to be wrongly applied to compute float64 surface temperature values from a (signed) …. Values are roughly -50 to 600 Kelvin instead of 270 to 310 Kelvin. It seems like the problem arises from misinterpreting the signed short raw values in the netCDF file. Here is a notebook that better explains the issue: https://github.com/CCI-Tools/sandbox/blob/4c7a98a4efd1ba55152d2799b499cb27027c2b45/notebooks/norman/xarray-sst-issues.ipynb
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/822/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
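One way to inspect the packing described here is to disable automatic masking and scaling and apply scale_factor and add_offset by hand. This is a sketch only; the file name and the analysed_sst variable name are assumptions, not taken from the excerpt.

```python
# Sketch: open without automatic scaling, then unpack manually to check how the
# signed 16-bit raw values map onto Kelvin. File and variable names are assumed.
import xarray as xr

ds = xr.open_dataset("esacci-sst-example.nc", mask_and_scale=False)
packed = ds["analysed_sst"]                       # stored as a signed short on disk
scale = packed.attrs.get("scale_factor", 1.0)
offset = packed.attrs.get("add_offset", 0.0)
unpacked = packed.astype("float64") * scale + offset
print(float(unpacked.min()), float(unpacked.max()))
```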
2109 · Dataset.expand_dims() not lazy
id: 321553778 · node_id: MDU6SXNzdWUzMjE1NTM3Nzg= · user: forman (206773) · state: closed · locked: 0 · comments: 2 · author_association: NONE
created_at: 2018-05-09T12:39:44Z · updated_at: 2018-05-09T15:45:31Z · closed_at: 2018-05-09T15:45:31Z
body (excerpt): The following won't come back for a very long time or will fail with an out-of-memory error: …. Problem description: when I call Dataset.expand_dims('time') on one of my ~2GB datasets (compressed), it seems to load all data into memory; at least, memory consumption goes beyond 12GB, eventually ending in an out-of-memory exception. (Sorry for the German UI.) Expected Output: …. Output of ….
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/2109/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
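The memory blow-up described here can usually be avoided by keeping the dataset dask-backed. This sketch uses a hypothetical file name and opens it with chunks so that expand_dims only inserts a length-1 dimension without loading the values.

```python
# Sketch: open the file dask-backed so expand_dims stays lazy.
# The file name is hypothetical; chunks={} requests lazily loaded (dask) arrays.
import xarray as xr

ds = xr.open_dataset("some_2gb_compressed_file.nc", chunks={})
expanded = ds.expand_dims("time")      # adds a new length-1 "time" dimension
print(expanded.sizes)                  # no data has been read into memory yet
```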
1579 · Support for unsigned data
id: 258744901 · node_id: MDU6SXNzdWUyNTg3NDQ5MDE= · user: forman (206773) · state: closed · locked: 0 · comments: 3 · author_association: NONE
created_at: 2017-09-19T08:57:15Z · updated_at: 2017-09-21T15:46:30Z · closed_at: 2017-09-20T13:15:36Z
body (excerpt): The "old" NetCDF 3 format doesn't have explicit support for unsigned integer types and therefore a recommendation/convention exists to set the variable attribute …. Are there any plans to interpret the …? I'd really like to help out, but I fear I still don't know enough about dask to provide an efficient PR for that. My workaround is to manually convert the variables in question, which are of type …, which results in an ….
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/1579/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
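The manual workaround described in the body looks roughly like the sketch below; the file and variable names are hypothetical, and recent xarray releases interpret the _Unsigned attribute automatically as part of CF decoding.

```python
# Sketch of the manual workaround: reinterpret a signed byte variable flagged with
# _Unsigned = "true" as unsigned. File and variable names are hypothetical.
import xarray as xr

ds = xr.open_dataset("landcover.nc", decode_cf=False)
var = ds["lccs_class"]
if str(var.attrs.get("_Unsigned", "")).lower() == "true":
    ds["lccs_class"] = var.astype("uint8")   # bit-for-bit reinterpretation of int8 values
```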
821 · datetime units interpretation wrong in special cases
id: 146908323 · node_id: MDU6SXNzdWUxNDY5MDgzMjM= · user: forman (206773) · state: closed · locked: 0 · comments: 3 · author_association: NONE
created_at: 2016-04-08T11:55:44Z · updated_at: 2016-04-09T16:55:10Z · closed_at: 2016-04-09T16:54:10Z
body (excerpt): Hi there, I have a datetime issue with a certain type of (CF-compliant!) netCDF files originating from the ESA CCI Sea Surface Temperature project. With other climate data, everything seems fine. When I open such a netCDF file, the datetime value(s) of the time dimension seem to be wrong. If I do … I get …. The time dimension is … and the time value is …. Here is the link to the data: ftp://anon-ftp.ceda.ac.uk/neodc/esacci/sst/data/lt/Analysis/L4/v01.1/2010/01/01/20100101120000-ESACCI-L4_GHRSST-SSTdepth-OSTIA-GLOB_LT-v02.0-fv01.1.nc. I'm not sure whether this is actually a CF-specific issue that xarray doesn't want to deal with. If so, could you please give some advice on how to get around this. I'm sure other xarray lovers will face this issue sooner or later. Thanks! -- Norman
reactions: all counts 0 (https://api.github.com/repos/pydata/xarray/issues/821/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
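To inspect what is going on with the time coordinate, the file can be opened with time decoding disabled and then decoded explicitly. The sketch assumes the GHRSST file linked above has been downloaded locally.

```python
# Sketch: compare the raw time value and its units attribute with the CF-decoded datetime.
# Assumes the GHRSST file linked in the issue is available locally.
import xarray as xr

path = "20100101120000-ESACCI-L4_GHRSST-SSTdepth-OSTIA-GLOB_LT-v02.0-fv01.1.nc"

raw = xr.open_dataset(path, decode_times=False)
print(raw["time"].values, raw["time"].attrs.get("units"))

decoded = xr.decode_cf(raw)            # apply CF time decoding explicitly
print(decoded["time"].values)
```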
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
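For reference, the query shown at the top of this page can be reproduced against a local copy of the database with Python's standard sqlite3 module; the database file name below is an assumption.

```python
# Sketch: run the "state = 'closed' AND user = 206773" query against a local copy
# of this database. The file name "github.db" is an assumption.
import sqlite3

conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, number, title, comments, created_at, updated_at, closed_at
    FROM issues
    WHERE state = 'closed' AND [user] = 206773
    ORDER BY updated_at DESC
    """
).fetchall()
for row in rows:
    print(row)
conn.close()
```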