issues
4 rows where type = "issue" and user = 23199378, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
467736580 | MDU6SXNzdWU0Njc3MzY1ODA= | 3109 | In the contribution instructions, the py36.yml fails to set up | mmartini-usgs 23199378 | closed | 0 | | | 2 | 2019-07-13T15:55:23Z | 2022-04-09T02:05:48Z | 2022-04-09T02:05:48Z | NONE | | | | **Code Sample, a copy-pastable example if possible**<br>`conda env create -f ci/requirements/py36.yml`<br>**Problem description**<br>In the contribution instructions, the py36.yml fails to set up, so the test environment does not get created.<br>**Expected Output**<br>A test environment<br>**Output of** | { "url": "https://api.github.com/repos/pydata/xarray/issues/3109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue |
261403591 | MDU6SXNzdWUyNjE0MDM1OTE= | 1598 | Need better user control of _FillValue attribute in NetCDF files | mmartini-usgs 23199378 | closed | 0 | | | 9 | 2017-09-28T17:44:20Z | 2017-10-26T05:19:30Z | 2017-10-26T05:19:30Z | NONE | | | | This issue is under discussion here: https://github.com/pydata/xarray/pull/1165 It is not desirable for us to have _FillValue = NaN for dimensions and coordinate variables. In trying to use xarray, _FillValue was carefully kept out of these variables and dimensions during the creation of the un-resampled file, and it was then found to appear during the to_netcdf operation. This happens even though mask_and_scale=False is being used with xr.open_dataset. I would expect downstream code to have trouble with coordinates that don't make logical sense (time or place being NaN, for instance). We would prefer NOT to instantiate coordinate variable data with any fill value. Keeping NaNs out of coordinate variables, dimensions, and minima and maxima is part of our QA/QC process to avoid downstream issues. | { "url": "https://api.github.com/repos/pydata/xarray/issues/1598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue |
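The behavior requested in issue 1598 can be controlled through the per-variable `encoding` argument of `to_netcdf`: setting `_FillValue` to `None` tells xarray not to write a `_FillValue` attribute for that variable. A minimal sketch, assuming an illustrative dataset and output filename (neither is from the issue):

```python
import numpy as np
import xarray as xr

# Illustrative dataset: one data variable plus a numeric time coordinate.
ds = xr.Dataset(
    {"temp": ("time", np.array([20.1, np.nan, 21.3]))},
    coords={"time": np.array([0.0, 1.0, 2.0])},
)

# _FillValue=None in a variable's encoding asks to_netcdf not to write a
# _FillValue attribute for that variable, keeping NaN fills off the
# coordinate.
encoding = {"time": {"_FillValue": None}}
# ds.to_netcdf("out.nc", encoding=encoding)  # requires a netCDF backend
```

The same effect can be made persistent by assigning to `ds["time"].encoding` before saving.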
256243636 | MDU6SXNzdWUyNTYyNDM2MzY= | 1562 | Request: implement unsigned integer type for xarray resample skipna | mmartini-usgs 23199378 | closed | 0 | | | 2 | 2017-09-08T12:48:53Z | 2017-09-08T16:13:46Z | 2017-09-08T16:12:23Z | NONE | | | | I would like to be able to use the skipna switch with unsigned integer types in netCDF4 files I'm processing with xarray. Currently it appears to be unsupported:<br>`~\AppData\Local\Continuum\Miniconda3\envs\IOOS3\lib\site-packages\xarray\core\duck_array_ops.py in f(values, axis, skipna, **kwargs)`<br>`184     raise NotImplementedError(`<br>`185         'skipna=True not yet implemented for %s with dtype %s'`<br>`--> 186     % (name, values.dtype))`<br>`187 nanname = 'nan' + name`<br>`188 if (isinstance(axis, tuple) or not values.dtype.isnative or`<br>`NotImplementedError: skipna=True not yet implemented for mean with dtype uint32`<br>Thanks, Marinna | { "url": "https://api.github.com/repos/pydata/xarray/issues/1562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue |
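A common workaround for the limitation reported in issue 1562 is to cast the unsigned-integer variable to a float dtype before the reduction, so that missing values can be represented as NaN and `skipna=True` has something to skip. A sketch on made-up data (the choice of 0 as a missing-value sentinel is hypothetical, not from the issue):

```python
import numpy as np
import xarray as xr

# uint32 counts, with 0 standing in (hypothetically) for missing samples.
counts = xr.DataArray(np.array([1, 2, 3, 0], dtype="uint32"), dims="sample")

# Cast to float so the array can hold NaN, mask the sentinel, then reduce;
# skipna=True ignores the NaNs introduced by the mask.
vals = counts.astype("float64").where(counts != 0)
mean = vals.mean(skipna=True)  # mean of [1, 2, 3]
```

The cast costs memory for large arrays, but it sidesteps the `NotImplementedError` because the nan-aware reduction paths are implemented for float dtypes.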
253407851 | MDU6SXNzdWUyNTM0MDc4NTE= | 1534 | to_dataframe (pandas) usage question | mmartini-usgs 23199378 | closed | 0 | | | 6 | 2017-08-28T18:02:56Z | 2017-09-07T08:00:41Z | 2017-09-07T08:00:41Z | NONE | | | | Apologies for what is probably a very newbie question: if I convert such a large file to pandas using to_dataframe() to gain access to more pandas methods, will I lose the speed and dask capability that is so wonderful in xarray? I have a very large netCDF file (3 GB with 3 million data points of 1-2 Hz ADCP data) that needs to be reduced to hourly or 10 min averages. xarray is perfect for this. I am exploring resample and other methods. It is amazingly fast doing this:<br>And an offset of about half a day is introduced to the data, probably user error or due to filtering. To figure this out, I am looking at using resample in pandas directly, or multi-indexing and reshaping using methods that are not inherited from pandas by xarray, then back to xarray using to_xarray. I will also need to be masking data (and other things pandas can do) during a QA/QC process. It appears that pandas can do masking and xarray does not inherit masking? Am I understanding the relationship between xarray and pandas correctly? Thanks, Marinna | { "url": "https://api.github.com/repos/pydata/xarray/issues/1534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | completed | xarray 13221727 | issue |
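On the question in issue 1534: `to_dataframe()` materializes the data in memory as a pandas DataFrame, so dask-backed laziness is lost, but the same time-based reduction is available on both sides, and xarray's `where()` covers the masking use case directly. A sketch with synthetic minute-resolution data (the variable name and values are illustrative, not from the issue):

```python
import numpy as np
import pandas as pd
import xarray as xr

# Two hours of synthetic 1-minute "velocity" samples.
times = pd.date_range("2017-01-01", periods=120, freq="min")
vel = xr.DataArray(np.arange(120.0), coords={"time": times},
                   dims="time", name="vel")

# Hourly averaging directly in xarray...
hourly_xr = vel.resample(time="1h").mean()

# ...and the same reduction after converting to pandas.
hourly_pd = vel.to_dataframe().resample("1h").mean()

# Masking does exist in xarray: where() replaces non-matching values
# with NaN, much like a pandas boolean mask.
masked = vel.where(vel < 100.0)
```

Comparing `hourly_xr` against `hourly_pd` bin by bin is one way to localize a suspected bin-labeling offset, since both resamplers expose label and closed-edge conventions.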
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```