issues
6 rows where user = 1956032 sorted by updated_at descending
**#6031 · Add argopy to ecosystem.rst doc page** (pull request, closed)

- id: 1064803302 · node_id: PR_kwDOAMm_X84vE1Ty
- user: gmaze (1956032) · author_association: CONTRIBUTOR
- comments: 1 · reactions: 0
- created_at: 2021-11-26T21:05:39Z · updated_at: 2021-11-27T13:36:21Z · closed_at: 2021-11-27T13:36:21Z
- pull_request: pydata/xarray/pulls/6031 · draft: 0 · locked: 0
- repo: xarray (13221727)

As it says, this is to add the https://github.com/euroargodev/argopy xarray-related project to the documentation page. argopy is a Python library that aims to ease Argo data access, manipulation, and visualisation for standard users as well as Argo experts. argopy comes with an xarray accessor.
**#6030 · Add argopy to Related Projects doc page** (pull request, closed)

- id: 1064799086 · node_id: PR_kwDOAMm_X84vE0ji
- user: gmaze (1956032) · author_association: CONTRIBUTOR
- comments: 0 · reactions: 0
- created_at: 2021-11-26T20:53:48Z · updated_at: 2021-11-26T21:00:02Z · closed_at: 2021-11-26T21:00:02Z
- pull_request: pydata/xarray/pulls/6030 · draft: 0 · locked: 0
- repo: xarray (13221727)

As it says, this is to add the https://github.com/euroargodev/argopy xarray-related project to the documentation page. argopy is a Python library that aims to ease Argo data access, manipulation, and visualisation for standard users as well as Argo experts. argopy comes with an xarray accessor.
**#3361 · Recommendations for domain-specific accessor documentation** (issue, closed: completed)

- id: 500949040 · node_id: MDU6SXNzdWU1MDA5NDkwNDA=
- user: gmaze (1956032) · author_association: CONTRIBUTOR
- comments: 6 · reactions: 0
- created_at: 2019-10-01T14:48:39Z · updated_at: 2020-08-09T22:45:48Z · closed_at: 2020-08-09T22:45:48Z
- repo: xarray (13221727)

Hi, I'm currently working on an ocean domain-specific accessor for a machine learning technique (https://github.com/gmaze/pyxpcm/tree/nfeatures). I wonder whether the xarray/pangeo community has recommendations on how to document this appropriately. Right now I simply have an auto-generated API page; see: https://pyxpcm-dev.readthedocs.io/en/latest/api.html#xarray-dataset-accessor-the-pyxpcm-name-space
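The accessor pattern this issue is about can be illustrated with a minimal stand-alone sketch of the mechanism behind xarray's `register_dataset_accessor` decorator. Everything below (`_CachedAccessor`, `register_accessor`, the stand-in `Dataset` class, and the `argo`/`point2profile` names) is illustrative, not xarray's actual internals:

```python
# Minimal sketch of accessor registration: a non-data descriptor that
# builds the accessor on first attribute access and caches it on the
# instance, which is essentially what xarray's decorator sets up.

class _CachedAccessor:
    """Instantiate the accessor once per object, then reuse it."""

    def __init__(self, name, accessor_cls):
        self._name = name
        self._accessor_cls = accessor_cls

    def __get__(self, obj, cls):
        if obj is None:  # accessed on the class itself
            return self._accessor_cls
        accessor = self._accessor_cls(obj)
        # Cache on the instance; instance __dict__ shadows this
        # non-data descriptor on subsequent lookups.
        obj.__dict__[self._name] = accessor
        return accessor


def register_accessor(name, target_cls):
    """Attach an accessor class to target_cls under the given name."""
    def decorator(accessor_cls):
        setattr(target_cls, name, _CachedAccessor(name, accessor_cls))
        return accessor_cls
    return decorator


class Dataset:  # stand-in for xarray.Dataset
    pass


@register_accessor("argo", Dataset)
class ArgoAccessor:
    def __init__(self, ds):
        self._ds = ds

    def point2profile(self):
        # A domain-specific method would transform self._ds here.
        return "transformed"


ds = Dataset()
print(ds.argo.point2profile())  # -> transformed
```

The caching matters for real accessors: any state the accessor holds survives across calls, because `ds.argo` returns the same object every time for a given `ds`.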
**#3578 · Add pyXpcm to Related Projects doc page** (pull request, closed)

- id: 529501991 · node_id: MDExOlB1bGxSZXF1ZXN0MzQ2MzY3ODU3
- user: gmaze (1956032) · author_association: CONTRIBUTOR
- comments: 2 · reactions: 0
- created_at: 2019-11-27T18:06:55Z · updated_at: 2019-11-27T19:39:07Z · closed_at: 2019-11-27T19:39:06Z
- pull_request: pydata/xarray/pulls/3578 · draft: 0 · locked: 0
- repo: xarray (13221727)

As it says, this is just to add the https://github.com/obidam/pyxpcm xarray-related project to the documentation page. pyXpcm is a Python package to create and work with ocean Profile Classification Models; it consumes and produces xarray objects.
**#1732 · IndexError when printing dataset from an Argo file** (issue, closed: completed)

- id: 275744315 · node_id: MDU6SXNzdWUyNzU3NDQzMTU=
- user: gmaze (1956032) · author_association: CONTRIBUTOR
- comments: 14 · reactions: 0
- created_at: 2017-11-21T15:04:16Z · updated_at: 2017-11-27T08:21:15Z · closed_at: 2017-11-25T19:49:24Z
- repo: xarray (13221727)

Working with a netCDF Argo data file, I encountered the following error:

```python
# Sample data file here: https://storage.googleapis.com/myargo/sample/4902076_prof.nc
argofile = '4902076_prof.nc'
ds = xr.open_dataset(argofile)
print ds
```

```
IndexError: The indexing operation you are attempting to perform is not valid on netCDF4.Variable object. Try loading your data into memory first by calling .load().

Original traceback:
Traceback (most recent call last):
  File "/Users/gmaze/anaconda/envs/obidam/lib/python2.7/site-packages/xarray/backends/netCDF4_.py", line 62, in __getitem__
    data = getitem(self.get_array(), key)
  File "netCDF4/_netCDF4.pyx", line 3961, in netCDF4._netCDF4.Variable.__getitem__
  File "netCDF4/_netCDF4.pyx", line 4796, in netCDF4._netCDF4.Variable._get
IndexError
```

The error remains the same even if I load the data first, as suggested in the error message. However, I can keep working with the dataset and access its variables; only the printing of the ds object is affected. I can't determine where in my package-updating workflow this popped up; it used to work fine up to xarray version 0.9.5, and here I'm using 0.10.0rc1. It is worth noting that using the 'scipy' engine solves the issue!

completed | xarray 13221727 | issue
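The 'scipy' engine workaround noted at the end of the report follows a generic try-the-default-then-fall-back pattern. A self-contained sketch of that pattern, where `open_with` is a hypothetical stand-in for `xr.open_dataset` (so the snippet runs without xarray or the data file) and the raised error mimics the one reported above:

```python
# Hypothetical stand-in for xr.open_dataset: the default engine
# fails the way the issue describes, the alternative succeeds.
def open_with(path, engine="netcdf4"):
    if engine == "netcdf4":
        raise IndexError(
            "The indexing operation you are attempting to perform "
            "is not valid on netCDF4.Variable object."
        )
    return {"path": path, "engine": engine}


def robust_open(path):
    """Try the default backend first; retry with 'scipy' on failure."""
    try:
        return open_with(path)
    except IndexError:
        # Fall back to the engine the reporter found to work.
        return open_with(path, engine="scipy")


print(robust_open("4902076_prof.nc"))
# -> {'path': '4902076_prof.nc', 'engine': 'scipy'}
```

With the real `xr.open_dataset`, the same shape applies: catch the backend-specific error and reopen with `engine="scipy"` (which only supports netCDF3-style files, so it is a workaround rather than a general fix).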
**#1662 · Decoding time according to CF conventions raises error if a NaN is found** (issue, closed: completed)

- id: 268725471 · node_id: MDU6SXNzdWUyNjg3MjU0NzE=
- user: gmaze (1956032) · author_association: CONTRIBUTOR
- comments: 4 · reactions: 0
- created_at: 2017-10-26T11:33:44Z · updated_at: 2017-11-21T14:38:41Z · closed_at: 2017-11-21T14:38:41Z
- repo: xarray (13221727)

Working with Argo data, I have difficulties decoding time-related variables. More specifically, it can happen that a date variable contains FillValues that are set to NaN when the netCDF file is opened, and that makes the decoding raise an error. Sure, I can open the netCDF file with the decode_times=False option, but this is not about being able to decode the data; it seems to me to be about how to handle FillValues in a time axis. I understand that with most gridded datasets the time axis/dimension/coordinate is full and contains no missing values, which may explain why nobody has reported this before.

Here is a simple way to reproduce the error:

```python
attrs = {'units': 'days since 1950-01-01 00:00:00 UTC'}  # Classic Argo data Julian Day units

# OK!
jd = [24658.46875, 24658.46366898, 24658.47256944]  # Sample of Julian dates from Argo data
ds = xr.Dataset({'time': ('time', jd, attrs)})
print xr.decode_cf(ds)
```

```
<xarray.Dataset>
Dimensions:  (time: 3)
Coordinates:
  * time     (time) datetime64[ns] 2017-07-06T11:15:00 ...
Data variables:
    *empty*
```

```python
# Not OK with a NaN
jd = [24658.46875, 24658.46366898, 24658.47256944, np.NaN]  # Another sample of Julian dates from Argo data
ds = xr.Dataset({'time': ('time', jd, attrs)})
print xr.decode_cf(ds)
```

```
ValueError: unable to decode time units 'days since 1950-01-01 00:00:00 UTC' with the default calendar. Try opening your dataset with decode_times=False.

Full traceback:
Traceback (most recent call last):
  File "/Users/gmaze/anaconda/envs/obidam/lib/python2.7/site-packages/xarray/conventions.py", line 389, in __init__
    result = decode_cf_datetime(example_value, units, calendar)
  File "/Users/gmaze/anaconda/envs/obidam/lib/python2.7/site-packages/xarray/conventions.py", line 157, in decode_cf_datetime
    dates = _decode_datetime_with_netcdf4(flat_num_dates, units, calendar)
  File "/Users/gmaze/anaconda/envs/obidam/lib/python2.7/site-packages/xarray/conventions.py", line 99, in _decode_datetime_with_netcdf4
    dates = np.asarray(nc4.num2date(num_dates, units, calendar))
  File "netCDF4/_netCDF4.pyx", line 5244, in netCDF4._netCDF4.num2date (netCDF4/_netCDF4.c:64839)
ValueError: cannot convert float NaN to integer
```

I would expect the decoding to work as in the first case and simply preserve NaNs where they are. Any ideas or suggestions? Thanks

completed | xarray 13221727 | issue
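The behaviour requested here, decode the numeric times but keep the missing values missing, can be sketched with the stdlib alone. `decode_days_since_1950` below is a hypothetical helper, not xarray API (later xarray versions do decode NaN fill values to NaT; this snippet only illustrates the idea on the issue's own sample values):

```python
# Hypothetical stand-alone decoder: converts "days since 1950-01-01"
# floats to datetimes, preserving NaN entries as None (a stand-in for
# NaT) instead of raising like num2date does in the traceback above.
import math
from datetime import datetime, timedelta

EPOCH = datetime(1950, 1, 1)  # from the units attribute above


def decode_days_since_1950(values):
    decoded = []
    for v in values:
        if isinstance(v, float) and math.isnan(v):
            decoded.append(None)  # keep the missing value missing
        else:
            decoded.append(EPOCH + timedelta(days=v))
    return decoded


jd = [24658.46875, float("nan")]  # first sample value plus a NaN fill
print(decode_days_since_1950(jd))
# -> [datetime.datetime(2017, 7, 6, 11, 15), None]
```

The first element matches the `2017-07-06T11:15:00` shown in the successful `decode_cf` output, and the NaN passes through instead of aborting the whole decode.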
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
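The schema above can be exercised with Python's stdlib `sqlite3` module. This sketch trims the table to the columns it actually uses and reproduces the page's "user = 1956032, ordered by updated_at descending" query on two of the rows shown above:

```python
# Build a trimmed issues table in memory and run the page's query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        updated_at TEXT
    )
""")  # trimmed: the full schema has 23 columns
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?)",
    [
        (1064803302, 6031, "Add argopy to ecosystem.rst doc page",
         1956032, "2021-11-27T13:36:21Z"),
        (500949040, 3361, "Recommendations for domain-specific accessor documentation",
         1956032, "2020-08-09T22:45:48Z"),
    ],
)

# ISO-8601 timestamps sort correctly as plain strings, so ORDER BY
# on the TEXT column gives true chronological order.
rows = conn.execute(
    "SELECT number, title FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (1956032,),
).fetchall()
print(rows)
```

The most recently updated issue (#6031) comes back first, matching the ordering of the records on this page.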