issues
8 rows where repo = 13221727 and user = 5700886, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
538909075 | MDU6SXNzdWU1Mzg5MDkwNzU= | 3633 | How to ignore non-existing dims given in chunks? | willirath 5700886 | closed | 1 | 2 | 2019-12-17T08:19:45Z | 2023-08-02T19:51:36Z | 2023-08-02T19:51:36Z | CONTRIBUTOR | Is there a way of over-specifying chunks upon opening a dataset without throwing an error? Currently, giving chunk sizes along dimensions that are not present in the dataset fails with a |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
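The workaround for the question above can be sketched as a small filter applied to the requested chunk sizes before calling `Dataset.chunk` (the `filter_chunks` helper is hypothetical, not xarray API):

```python
def filter_chunks(chunks, dims):
    """Drop chunk sizes given for dimensions that are not in the dataset."""
    return {dim: size for dim, size in chunks.items() if dim in dims}

# a dataset with dims ("x", "y") and an over-specified chunk request
safe = filter_chunks({"x": 2, "y": 2, "time": 1}, ("x", "y"))
print(safe)  # → {'x': 2, 'y': 2}
```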
317620172 | MDU6SXNzdWUzMTc2MjAxNzI= | 2081 | Should `DataArray.to_netcdf` warn if `self.name is None`? | willirath 5700886 | closed | 0 | 1 | 2018-04-25T13:07:26Z | 2019-07-12T02:50:22Z | 2019-07-12T02:50:22Z | CONTRIBUTOR | Currently, … Should there be a warning that at least notifies the user that it would be a good idea to pick a decent variable name? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
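The suggested check could look roughly like this duck-typed wrapper (`to_netcdf_checked` is hypothetical, not part of xarray's API):

```python
import warnings

def to_netcdf_checked(da, path):
    # hypothetical wrapper: warn before writing an unnamed DataArray,
    # which would otherwise be stored under a fallback variable name
    if da.name is None:
        warnings.warn("DataArray has no name; consider da.rename(...) "
                      "before writing to netCDF.")
    return da.to_netcdf(path)
```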
207054921 | MDU6SXNzdWUyMDcwNTQ5MjE= | 1263 | xarray.open_mfdataset returns inconsistent times | willirath 5700886 | closed | 0 | 4 | 2017-02-12T14:55:02Z | 2019-02-19T20:47:26Z | 2019-02-19T20:47:26Z | CONTRIBUTOR | Problem: I am running into inconsistent time coordinates with a long climate model experiment that exceeds the limits of … Currently, … Solution: …
The latter is equivalent to a workaround I use for the moment: Pass |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
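The decode-once-after-combining idea can be illustrated in memory, assuming xarray is available; the two datasets below stand in for files whose raw time values use different reference dates in their CF `units` attribute:

```python
import xarray as xr

# two in-memory stand-ins for files with different time reference dates
a = xr.Dataset(coords={"time": ("time", [0, 1], {"units": "days since 2000-01-01"})})
b = xr.Dataset(coords={"time": ("time", [0, 1], {"units": "days since 2000-01-03"})})

# decode each dataset, then combine: the decoded axis is consistent
# even though the raw integer values overlap
combined = xr.concat([xr.decode_cf(a), xr.decode_cf(b)], dim="time")
print(combined.time.values)
```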
312449001 | MDU6SXNzdWUzMTI0NDkwMDE= | 2043 | How to completely prevent time coordinates from being decoded? | willirath 5700886 | closed | 0 | 1 | 2018-04-09T09:01:35Z | 2018-04-09T10:55:58Z | 2018-04-09T10:55:58Z | CONTRIBUTOR | Minimal example

The following shows that creating a time coordinate with two dates before and after the latest date compatible with `datetime64[ns]` yields mixed dtypes:

```python
In [1]: from datetime import datetime
   ...: import numpy as np
   ...: import xarray as xr

In [2]: dates = [datetime(year, 1, 1) for year in [2262, 2263]]
   ...: ds = xr.Dataset(coords={"dates": dates})

In [3]: print(ds.coords["dates"][0])
<xarray.DataArray 'dates' ()>
array('2262-01-01T00:00:00.000000000', dtype='datetime64[ns]')
Coordinates:
    dates    datetime64[ns] 2262-01-01

In [4]: print(ds.coords["dates"][1])
<xarray.DataArray 'dates' ()>
array(datetime.datetime(2263, 1, 1, 0, 0), dtype=object)
Coordinates:
    dates    object 2263-01-01

In [5]: ds2 = xr.Dataset({})

In [6]: ds2["dates"] = (["dates", ], dates)

In [7]: ds2.coords["dates"][0]
Out[7]:
<xarray.DataArray 'dates' ()>
array('2262-01-01T00:00:00.000000000', dtype='datetime64[ns]')
Coordinates:
    dates    datetime64[ns] 2262-01-01

In [8]: ds2.coords["dates"][1]
Out[8]:
<xarray.DataArray 'dates' ()>
array(datetime.datetime(2263, 1, 1, 0, 0), dtype=object)
Coordinates:
    dates    object 2263-01-01
```

Problem description

I don't seem to find a way of passing time coordinates to an xarray Dataset without having them decoded. This is problematic, because it makes it very hard (or impossible?) for a user to make sure a time axis consists entirely of, e.g., …

Output of xr.show_versions()

```python
In [5]: xr.show_versions()
/home/wrath/miniconda3_20171008/envs/py3_std_course/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.4.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-38-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
xarray: 0.10.2
pandas: 0.22.0
numpy: 1.14.2
scipy: 1.0.1
netCDF4: 1.3.1
h5netcdf: 0.5.0
h5py: 2.7.1
Nio: None
zarr: 2.2.0
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.2
distributed: 1.21.4
matplotlib: 2.2.2
cartopy: 0.16.0
seaborn: 0.8.1
setuptools: 39.0.1
pip: 9.0.1
conda: None
pytest: None
IPython: 6.3.1
sphinx: None
``` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
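One way to keep such dates out of reach of the `datetime64[ns]` conversion, sketched here under the assumption that the axis can be stored as raw numbers with CF attributes instead of datetime objects:

```python
import xarray as xr

# keep the axis as plain integers plus CF attributes; xarray will not
# coerce these to datetime64[ns], so post-2262 dates survive intact
ds = xr.Dataset(coords={"dates": ("dates", [0, 365],
                                  {"units": "days since 2262-01-01"})})
print(ds.coords["dates"].dtype)  # an integer dtype, not datetime64[ns]
```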
260912521 | MDU6SXNzdWUyNjA5MTI1MjE= | 1596 | Equivalent of numpy.insert for DataSet / DataArray? | willirath 5700886 | closed | 0 | 5 | 2017-09-27T09:48:10Z | 2017-09-27T19:08:59Z | 2017-09-27T18:32:54Z | CONTRIBUTOR | Is there a simple way of inserting, say, a time step into an … Background: I have a year of gridded daily data with a few missing time steps. Each existing time step is represented by a file on disk. (To be specific: for 2016, there should be 366 files, but there are only 362.) In many cases, it would be nice to be able to just add masked data wherever a day is missing from the original data. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
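The usual answer here is `reindex` onto the complete calendar, which inserts the missing steps filled with NaN (i.e. masked data); a sketch mirroring the 366-vs-362 situation above:

```python
import numpy as np
import pandas as pd
import xarray as xr

# the full 2016 calendar (366 days) and a coordinate with 4 days missing
full_days = pd.date_range("2016-01-01", "2016-12-31", freq="D")
have = full_days[:100].union(full_days[104:])

ds = xr.Dataset({"v": ("time", np.ones(len(have)))}, coords={"time": have})

# reindex inserts the missing time steps, filled with NaN
filled = ds.reindex(time=full_days)
print(int(filled.v.isnull().sum()))  # → 4
```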
251734482 | MDExOlB1bGxSZXF1ZXN0MTM2ODE1OTQ4 | 1514 | Add `pathlib.Path` support to `open_(mf)dataset` | willirath 5700886 | closed | 0 | 12 | 2017-08-21T18:21:34Z | 2017-09-01T15:31:59Z | 2017-09-01T15:31:52Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1514 |
This is meant to eventually make
Currently, tests with Python 2 are failing, because there is no explicit … With Python 3, everything seems to work. I am not happy with the tests I've added so far, though. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
251714595 | MDExOlB1bGxSZXF1ZXN0MTM2ODAxMTk1 | 1513 | WIP: Add pathlib support to `open_(mf)dataset` | willirath 5700886 | closed | 0 | 1 | 2017-08-21T16:45:26Z | 2017-08-21T17:33:49Z | 2017-08-21T17:26:01Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1513 | This has #799 in mind. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
207011524 | MDExOlB1bGxSZXF1ZXN0MTA1NzY5NTU1 | 1261 | Allow for plotting dummy netCDF4.datetime objects. | willirath 5700886 | closed | 0 | 9 | 2017-02-11T22:03:33Z | 2017-03-09T21:43:54Z | 2017-03-09T21:43:48Z | CONTRIBUTOR | 0 | pydata/xarray/pulls/1261 | Currently, xarray/plot.py raises a This PR adds |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull |
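A rough sketch of the general idea, with a hypothetical `to_plottable` helper rather than the PR's actual code: convert calendar-aware dummy datetime objects into plain `datetime.datetime` so matplotlib's date handling can deal with them.

```python
from datetime import datetime

def to_plottable(times):
    # hypothetical helper: map objects exposing year/month/day (e.g.
    # netCDF4's dummy datetimes) onto plain datetime.datetime
    return [datetime(t.year, t.month, t.day,
                     getattr(t, "hour", 0), getattr(t, "minute", 0))
            for t in times]
```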
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
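The listing at the top of this page corresponds to a query against this schema; a sketch using an abbreviated in-memory copy (most columns omitted for brevity):

```python
import sqlite3

# abbreviated copy of the issues table, just enough to run the page's query
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
    [user] INTEGER, repo INTEGER, updated_at TEXT)""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [(538909075, 3633, "How to ignore non-existing dims given in chunks?",
      5700886, 13221727, "2023-08-02T19:51:36Z"),
     (317620172, 2081, "Should `DataArray.to_netcdf` warn if `self.name is None`?",
      5700886, 13221727, "2019-07-12T02:50:22Z")])

# the filter and sort shown in this page's header
rows = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = 13221727 AND [user] = 5700886 "
    "ORDER BY updated_at DESC").fetchall()
print(rows[0][0])  # → 3633
```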