issues
9 rows where comments = 4, type = "issue" and user = 10194086 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
144630996 | MDU6SXNzdWUxNDQ2MzA5OTY= | 810 | correct DJF mean | mathause 10194086 | closed | 0 | 4 | 2016-03-30T15:36:42Z | 2022-04-06T16:19:47Z | 2016-05-04T12:56:30Z | MEMBER | This started as a question and I add it as a reference. Maybe you have a comment. There are several ways to calculate time series of seasonal data (starting from monthly or daily data):

```python
# load libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr

# create example dataset
time = pd.date_range('2000.01.01', '2010.12.31', freq='M')
data = np.random.rand(*time.shape)
ds = xr.DataArray(data, coords=dict(time=time))

# (1) using resample
ds_res = ds.resample('Q-FEB', 'time')
ds_res = ds_res.sel(time=ds_res['time.month'] == 2)
ds_res = ds_res.groupby('time.year').mean('time')

# (2) this is wrong
ds_season = ds.where(ds['time.season'] == 'DJF').groupby('time.year').mean('time')

# (3) using where and rolling
# mask other months with nan
ds_DJF = ds.where(ds['time.season'] == 'DJF')
# rolling mean -> only Jan is not nan
# however, we lose Jan/Feb in the first year and Dec in the last
ds_DJF = ds_DJF.rolling(min_periods=3, center=True, time=3).mean()
# make annual mean
ds_DJF = ds_DJF.groupby('time.year').mean('time')

ds_res.plot(marker='*')
ds_season.plot()
ds_DJF.plot()
plt.show()
```

(1) The first is to use resample with 'Q-FEB' as the argument. This works fine. It does include Jan/Feb of the first year, and Dec of the last year + 1; whether that makes sense is debatable. One case where this does not work is when you have, say, two regions in your data set, for one you want to calculate DJF and for the other you want NovDecJan. (2) Using 'time.season' is wrong as it combines Jan, Feb and Dec from the same year. (3) The third uses |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
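The approaches in the issue above can also be implemented without resample's frequency aliases: label December with the *following* year so each Dec/Jan/Feb triple shares one winter label, then group. This is a sketch, not taken from the issue; the name `winter_year` is made up here.

```python
import numpy as np
import pandas as pd
import xarray as xr

# three years of monthly values 0, 1, 2, ...
time = pd.date_range("2000-01-01", "2002-12-31", freq="MS")
da = xr.DataArray(
    np.arange(time.size, dtype=float), coords=dict(time=time), dims="time"
)

# December belongs to the *following* winter, so shift its year label by one
winter_year = (da["time.year"] + (da["time.month"] == 12)).rename("winter_year")

# keep only DJF months, then average each winter (NaNs are skipped)
djf = da.where(da["time.season"] == "DJF").groupby(winter_year).mean("time")
```

Like approach (1), the first and last winters are incomplete (Jan/Feb only and Dec only, respectively), which may or may not be what you want.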
800118528 | MDU6SXNzdWU4MDAxMTg1Mjg= | 4858 | doctest failure with numpy 1.20 | mathause 10194086 | closed | 0 | 4 | 2021-02-03T08:57:43Z | 2021-02-07T21:57:34Z | 2021-02-07T21:57:34Z | MEMBER | What happened: Our doctests fail since numpy 1.20 came out: https://github.com/pydata/xarray/pull/4760/checks?check_run_id=1818512841#step:8:69 What you expected to happen: They don't ;-) Minimal Complete Verifiable Example: The following fails with numpy 1.20, while earlier versions converted the value:

```python
import numpy as np

x = np.arange(10)
x = np.pad(x, 1, "constant", constant_values=np.nan)
```

(requires numpy 1.20) Anything else we need to know?:
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4858/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
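The failure above comes from padding an integer array with a float NaN, which numpy 1.20 turned into an error instead of a silent cast. A sketch of the usual fix, not taken from the issue: cast to float before padding.

```python
import numpy as np

x = np.arange(10)

# numpy >= 1.20 refuses to cast NaN to the array's integer dtype,
# so convert to float before padding with a NaN fill value
padded = np.pad(x.astype(float), 1, "constant", constant_values=np.nan)
```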
688115687 | MDU6SXNzdWU2ODgxMTU2ODc= | 4385 | warnings from internal use of apply_ufunc | mathause 10194086 | closed | 0 | 4 | 2020-08-28T14:28:56Z | 2020-08-30T16:37:52Z | 2020-08-30T16:37:52Z | MEMBER | Another follow up from #4060: Minimal Complete Verifiable Example:
We should probably check the warnings in the test suite - there may be others. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
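The suggestion to "check the warnings in the test suite" can be done with the standard library alone; a minimal pattern (pytest's `filterwarnings = error` ini option has the same effect):

```python
import warnings

import numpy as np

# escalate every warning raised inside the block to an error, so a
# test fails if the API call under test warns unexpectedly
with warnings.catch_warnings():
    warnings.simplefilter("error")
    result = np.mean(np.arange(5.0))  # stand-in for the call under test
```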
559864146 | MDU6SXNzdWU1NTk4NjQxNDY= | 3750 | isort pre-commit hook does not skip text files | mathause 10194086 | closed | 0 | 4 | 2020-02-04T17:18:31Z | 2020-05-06T01:50:29Z | 2020-03-28T20:58:15Z | MEMBER | MCVE Code Sample: Add an arbitrary change to the file
The pre-commit hook will fail. Expected Output: the pre-commit hook passes. Problem Description: running |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
545764524 | MDU6SXNzdWU1NDU3NjQ1MjQ= | 3665 | Cannot roundtrip time in NETCDF4_CLASSIC | mathause 10194086 | closed | 0 | 4 | 2020-01-06T14:47:48Z | 2020-01-16T18:27:15Z | 2020-01-16T18:27:14Z | MEMBER | MCVE Code Sample:

```python
import numpy as np
import xarray as xr

time = xr.cftime_range("2006-01-01", periods=2, calendar="360_day")
da = xr.DataArray(time, dims=["time"])
da.encoding["dtype"] = np.float64  # np.float is removed in recent numpy
da.to_netcdf("tst.nc", format="NETCDF4_CLASSIC")
ds = xr.open_dataset("tst.nc")
ds.to_netcdf("tst2.nc", format="NETCDF4_CLASSIC")
```

yields:
Or an example without

```python
import numpy as np
import xarray as xr

time = xr.cftime_range("2006-01-01", periods=2, calendar="360_day")
da = xr.DataArray(time, dims=["time"])
da.encoding["_FillValue"] = np.array([np.nan])
xr.backends.netcdf3.encode_nc3_variable(xr.conventions.encode_cf_variable(da))
```

Expected Output: Xarray can save the dataset/ an Problem Description: If there is a time variable that can be encoded using integers only, but that has a Note: if the time cannot be encoded using integers only, it works:

```python
da = xr.DataArray(time, dims=["time"])
da.encoding["_FillValue"] = np.array([np.nan])
da.encoding["units"] = "days since 2006-01-01T12:00:00"
xr.backends.netcdf3.encode_nc3_variable(xr.conventions.encode_cf_variable(da))
```

Another note: when saving with NETCDF4:

```python
da = xr.DataArray(time, dims=["time"])
da.encoding["_FillValue"] = np.array([np.nan])
xr.backends.netCDF4_._encode_nc4_variable(xr.conventions.encode_cf_variable(da))
```
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
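The encoding step in the report can be exercised without a cftime calendar or a netCDF backend; a sketch with plain datetime64 times (`np.float64` stands in for the removed `np.float`, and the exact behaviour of `encode_cf_variable` is an assumption here):

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2006-01-01", periods=2)
da = xr.DataArray(time, dims=["time"])

# ask for a float on-disk dtype, as in the report
da.encoding["dtype"] = np.float64

# CF encoding turns datetimes into numbers plus a "units" attribute
encoded = xr.conventions.encode_cf_variable(da.variable)
```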
106595746 | MDU6SXNzdWUxMDY1OTU3NDY= | 577 | wrap lon coordinates to 360 | mathause 10194086 | closed | 0 | 4 | 2015-09-15T16:36:37Z | 2019-01-17T09:34:56Z | 2019-01-15T20:15:01Z | MEMBER | Assume I have two datasets with the same lat/ lon grid. However, one has |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
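A sketch of the usual workaround for mismatched longitude conventions (modulo arithmetic plus a re-sort); the example data is made up:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(4), coords=dict(lon=[-180, -90, 0, 90]), dims="lon"
)

# wrap longitudes from [-180, 180) to [0, 360) and restore ordering
wrapped = da.assign_coords(lon=da["lon"] % 360).sortby("lon")
```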
310819233 | MDU6SXNzdWUzMTA4MTkyMzM= | 2036 | better error message for to_netcdf -> unlimited_dims | mathause 10194086 | closed | 0 | 4 | 2018-04-03T12:39:21Z | 2018-05-18T14:48:32Z | 2018-05-18T14:48:32Z | MEMBER | Code Sample, a copy-pastable example if possible:

```python
import numpy as np
import xarray as xr

x = np.arange(10)
da = xr.Dataset(
    data_vars=dict(data=('dim1', x)),
    coords=dict(dim1=('dim1', x), dim2=('dim2', x)),
)
da.to_netcdf('tst.nc', format='NETCDF4_CLASSIC', unlimited_dims='dim1')
```

Problem description: This creates the error The correct syntax is With I only tested with netCDF4 as backend. Expected Output
Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
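The pitfall behind the confusing error is that a bare string is itself an iterable of one-character strings. A hypothetical helper (`normalize_unlimited_dims` is not an xarray function) showing the normalization a friendlier API or error message would perform:

```python
def normalize_unlimited_dims(unlimited_dims):
    """Wrap a bare string so it is not iterated character by character."""
    if isinstance(unlimited_dims, str):
        return [unlimited_dims]
    return list(unlimited_dims)
```

Iterating `'dim1'` directly yields `'d'`, `'i'`, `'m'`, `'1'`, which is why the backend complains about unknown dimensions.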
106581329 | MDU6SXNzdWUxMDY1ODEzMjk= | 576 | define fill value for where | mathause 10194086 | closed | 0 | 4 | 2015-09-15T15:27:32Z | 2017-08-08T17:00:30Z | 2017-08-08T17:00:30Z | MEMBER | It would be nice if
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
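For reference, `where` did eventually gain a fill-value argument (`other`); a small sketch:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5), dims="x")

# keep values where the condition holds, fill the rest with -1
# instead of the default NaN
filled = da.where(da > 2, other=-1)
```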
67332234 | MDU6SXNzdWU2NzMzMjIzNA== | 386 | "loosing" virtual variables | mathause 10194086 | closed | 0 | 4 | 2015-04-09T10:35:31Z | 2015-04-20T03:55:44Z | 2015-04-20T03:55:44Z | MEMBER | Once I take a mean over virtual variables, they are not available any more.
Is this intended behaviour? And could I get them back somehow? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
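A sketch of the behaviour being asked about: virtual (datetime-component) variables are derived from the `time` coordinate on the fly, so any reduction that drops `time` drops them too.

```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=4)
ds = xr.Dataset({"data": ("time", np.arange(4.0))}, coords={"time": time})

# virtual variable, computed on demand from the time coordinate
month = ds["time.month"]

# reducing over time removes the coordinate, and with it the
# virtual variables; extract what you need before reducing
mean = ds.mean("time")
```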
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```