issues
2 rows where state = "closed" and user = 22665917 sorted by updated_at descending
Issue 1745: open_mfdataset() memory error in v0.10

id: 277538485
node_id: MDU6SXNzdWUyNzc1Mzg0ODU=
number: 1745
user: nick-weber 22665917
state: closed
locked: 0
comments: 24
created_at: 2017-11-28T21:08:23Z
updated_at: 2019-01-13T01:51:43Z
closed_at: 2019-01-13T01:51:43Z
author_association: NONE
state_reason: completed
repo: xarray 13221727
type: issue
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/1745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }

Body:

Code Sample

```python
import xarray

ncfiles = '/example/path/to/wrf/netcdfs/*'
dropvars = ['list', 'of', 'many', 'vars', 'to', 'drop']
dset = xarray.open_mfdataset(ncfiles, drop_variables=dropvars, concat_dim='Time')
```

Problem description

I am trying to load 73 model (WRF) output files using `open_mfdataset()`. When I run the above code with v0.9.6, it completes in roughly 7 seconds. But with v0.10, it crashes with the following error:

which, as I understand, means I'm exceeding my memory allocation. Any thoughts on what could be the source of this issue?

Output of `xr.show_versions()`
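The resolution is not captured in this row, but as a hedged illustration of the API involved: `open_mfdataset()` also accepts a `chunks=` mapping that keeps each variable as small lazy dask chunks rather than loading it eagerly, which is one way to bound peak memory. The path, variable names, and chunk size below are placeholders mirroring the report, and `combine='nested'` reflects the signature of newer xarray releases rather than the v0.10 call shown above.

```python
import xarray

# Placeholder path and variable names, mirroring the report above.
ncfiles = '/example/path/to/wrf/netcdfs/*'
dropvars = ['list', 'of', 'many', 'vars', 'to', 'drop']

# A chunks= mapping asks dask to load small pieces of each variable lazily,
# which can keep peak memory bounded; the chunk size here is a placeholder.
# Newer xarray releases require combine='nested' when concat_dim is given.
dset = xarray.open_mfdataset(
    ncfiles,
    combine='nested',
    concat_dim='Time',
    drop_variables=dropvars,
    chunks={'Time': 1},
)
print(dset)
```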
Issue 2512: to_netcdf() fails because of datetime encoding

id: 374070147
node_id: MDU6SXNzdWUzNzQwNzAxNDc=
number: 2512
user: nick-weber 22665917
state: closed
locked: 0
comments: 2
created_at: 2018-10-25T18:17:47Z
updated_at: 2018-10-27T16:34:54Z
closed_at: 2018-10-27T16:34:54Z
author_association: NONE
state_reason: completed
repo: xarray 13221727
type: issue
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/2512/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }

Body:

Simple example:

```python
import numpy as np
from datetime import datetime, timedelta
import xarray

# "time" coordinate
dt = datetime(1999, 1, 1)
dts = np.array([dt + timedelta(days=x) for x in range(10)])
coords = {'time': dts}

# simple float data
data = np.arange(10)
vrbls = {'foo': (('time',), data)}

# create the Dataset
ds = xarray.Dataset(vrbls, coords)

# encode the time coordinate
units = 'days since 1900-01-01'
ds.time.encoding['units'] = units

# write to netcdf
ds.to_netcdf('test.nc')
```

Problem description

When I run the above, I get the following error when executing the last line:

The documentation indicates that datetime and datetime64 objects are both supported by xarray and should write to netCDF just fine when supplied "units" for encoding (this code fails with or without the encoding lines). Any idea what is going wrong here?

Output of `xr.show_versions()`
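The row does not record how the issue was closed, but a common workaround sketch (an assumption here, not the fix attached to the issue) is to build the time coordinate from numpy datetime64 values, for example via `pandas.date_range`, instead of Python datetime objects:

```python
import numpy as np
import pandas as pd
import xarray

# Same toy data as the report above, except the "time" coordinate is built
# from datetime64 values (via pandas) rather than Python datetime objects.
dts = pd.date_range('1999-01-01', periods=10, freq='D')
coords = {'time': dts}

data = np.arange(10.0)
vrbls = {'foo': (('time',), data)}

ds = xarray.Dataset(vrbls, coords)

# The units encoding remains optional; xarray chooses a default if omitted.
ds.time.encoding['units'] = 'days since 1900-01-01'
ds.to_netcdf('test.nc')
```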
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
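For reference, a small sketch of how the filter at the top of this page (closed issues by user 22665917, sorted by updated_at descending) maps onto this schema; the database filename is an assumption, not part of the export.

```python
import sqlite3

# Hypothetical filename for the SQLite database holding the schema above.
conn = sqlite3.connect("github.db")

# Same filter as the page header: closed issues by user 22665917,
# most recently updated first.
rows = conn.execute(
    """
    SELECT id, number, title, state, updated_at
    FROM issues
    WHERE state = 'closed' AND [user] = 22665917
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)
```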