issues
2 rows where repo = 13221727 and user = 8161792 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
656982083 | MDU6SXNzdWU2NTY5ODIwODM= | 4224 | wrong time encoding after padding | xzenggit 8161792 | open | 0 | 3 | 2020-07-15T00:46:53Z | 2022-04-29T17:39:17Z | NONE | What happened: If I open a netCDF file written with default settings (containing a daily time dimension) and then pad it with hourly values, the hourly time values cannot be saved, even though the padded dataset shows them in memory. I think this is due to the encoding, but I'm not sure how to fix it. What you expected to happen: I expected the final line of code to give me ```python array(['2000-01-01T00:00:00.000000000', '2000-01-01T01:00:00.000000000', '2000-01-01T02:00:00.000000000', '2000-01-01T03:00:00.000000000', '2000-01-01T04:00:00.000000000'], dtype='datetime64[ns]')``` instead of ```python array(['2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000'], dtype='datetime64[ns]')``` Minimal Complete Verifiable Example:
```python
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", freq="1D", periods=365)
ds = xr.Dataset({"foo": ("time", np.arange(365)), "time": time})
ds.to_netcdf('test5.nc')
ds = xr.open_dataset('test5.nc')
ds.time.encoding

# padding
ds_hourly = ds.resample(time='1h').pad()
ds_hourly.time.values[0:5]
# array(['2000-01-01T00:00:00.000000000', '2000-01-01T01:00:00.000000000',
#        '2000-01-01T02:00:00.000000000', '2000-01-01T03:00:00.000000000',
#        '2000-01-01T04:00:00.000000000'], dtype='datetime64[ns]')
ds_hourly.to_netcdf('test6.nc')

# load the padded data file
ds_hourly_load = xr.open_dataset('test6.nc')
ds_hourly_load.time.values[0:5]
# array(['2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000',
#        '2000-01-01T00:00:00.000000000', '2000-01-01T00:00:00.000000000',
#        '2000-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
```
Anything else we need to know?: Environment: xarray version: '0.15.1' Output of <tt>xr.show_versions()</tt> |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
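The round-trip failure in issue 4224 is consistent with the hourly dataset inheriting the daily file's time encoding: if timestamps are written as *integer* counts of days since a reference date, sub-daily offsets are truncated on write. The following is a minimal NumPy/pandas sketch of that mechanism only, not xarray's actual encoder:

```python
import numpy as np
import pandas as pd

# Five hourly timestamps, as produced by the resample-and-pad step.
times = pd.date_range("2000-01-01", periods=5, freq="h")

# Encode as integer days since the reference date, as a file saved with
# day-based units and an integer dtype would store them.
ref = np.datetime64("2000-01-01")
days = ((times.values - ref) / np.timedelta64(1, "D")).astype(np.int64)

# Decode back: every hourly timestamp collapses to midnight.
decoded = ref + days * np.timedelta64(1, "D")
print(decoded)  # all five values are 2000-01-01T00:00
```

Under that assumption, one commonly suggested workaround is to drop the inherited encoding before saving (e.g. `ds_hourly.time.encoding = {}` prior to `to_netcdf`), so that suitable units are chosen automatically on write.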
668905666 | MDU6SXNzdWU2Njg5MDU2NjY= | 4291 | resample function gives 0s instead of NaNs | xzenggit 8161792 | closed | 0 | 3 | 2020-07-30T15:59:32Z | 2020-08-05T16:55:58Z | 2020-08-05T16:55:58Z | NONE | What happened: When I use the resample function, periods that contain only NaNs come back as 0s instead of NaNs. What you expected to happen: NaNs should be the correct answer. Minimal Complete Verifiable Example:
```python
import numpy as np
import pandas as pd
import xarray as xr

dates = pd.date_range('20200101', '20200601', freq='h')
data = np.linspace(0, 10, num=len(dates))
data[0:30*24] = np.nan
da = xr.DataArray(data, coords=[dates], dims='time')
da.plot()

# Instead of NaNs, the resampled time series in January 2020 gives us 0s,
# which is not right.
da.resample(time='1d', skipna=True).sum(dim='time', skipna=True).plot()
```
Anything else we need to know?: Did I misunderstand something here? Thanks! Environment: xarray - '0.15.1' Output of <tt>xr.show_versions()</tt> |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
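The behaviour reported in issue 4291 matches how NaN-skipping sums are conventionally defined: with `skipna`, the sum of an all-NaN window is an empty sum, which is 0. A minimal pandas sketch (no xarray required) showing this, and how a `min_count=1` requirement restores NaN for all-NaN periods:

```python
import numpy as np
import pandas as pd

# Hourly series of ones where the entire first day is NaN.
dates = pd.date_range("2020-01-01", periods=48, freq="h")
s = pd.Series(np.ones(len(dates)), index=dates)
s.iloc[:24] = np.nan

daily = s.resample("1D").sum()                  # empty (all-NaN) sum -> 0.0
daily_nan = s.resample("1D").sum(min_count=1)   # require >= 1 valid value -> NaN

print(daily.iloc[0])      # 0.0
print(daily_nan.iloc[0])  # nan
```

xarray's reductions grew an analogous `min_count` argument for exactly this case, which is why the issue was closed as completed.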
```sql
CREATE TABLE [issues] (
  [id] INTEGER PRIMARY KEY,
  [node_id] TEXT,
  [number] INTEGER,
  [title] TEXT,
  [user] INTEGER REFERENCES [users]([id]),
  [state] TEXT,
  [locked] INTEGER,
  [assignee] INTEGER REFERENCES [users]([id]),
  [milestone] INTEGER REFERENCES [milestones]([id]),
  [comments] INTEGER,
  [created_at] TEXT,
  [updated_at] TEXT,
  [closed_at] TEXT,
  [author_association] TEXT,
  [active_lock_reason] TEXT,
  [draft] INTEGER,
  [pull_request] TEXT,
  [body] TEXT,
  [reactions] TEXT,
  [performed_via_github_app] TEXT,
  [state_reason] TEXT,
  [repo] INTEGER REFERENCES [repos]([id]),
  [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
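The view at the top of this page ("2 rows where repo = 13221727 and user = 8161792 sorted by updated_at descending") corresponds to a simple filtered query against this schema. A minimal sqlite3 sketch with a reduced column set (the table below is a hypothetical subset of the real schema, populated with the two rows shown on this page):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE issues (id INTEGER PRIMARY KEY, number INTEGER, title TEXT,"
    " user INTEGER, state TEXT, repo INTEGER, updated_at TEXT)"
)
rows = [
    (656982083, 4224, "wrong time encoding after padding",
     8161792, "open", 13221727, "2022-04-29T17:39:17Z"),
    (668905666, 4291, "resample function gives 0s instead of NaNs",
     8161792, "closed", 13221727, "2020-08-05T16:55:58Z"),
]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)", rows)

# The filtered, sorted view shown on this page.
result = conn.execute(
    "SELECT number, title, state FROM issues"
    " WHERE repo = 13221727 AND user = 8161792"
    " ORDER BY updated_at DESC"
).fetchall()
print(result)  # issue 4224 first (most recently updated), then 4291
```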