issues
5 rows where user = 1554921, sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at (sorted ▲), closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
**Issue #2242: to_netcdf(compute=False) can be slow** (id 334633212, node_id MDU6SXNzdWUzMzQ2MzMyMTI=)

- user: neishm (1554921) · state: closed (completed) · locked: 0 · comments: 5
- created_at: 2018-06-21T19:50:36Z · updated_at: 2019-01-13T21:13:28Z · closed_at: 2019-01-13T21:13:28Z
- author_association: CONTRIBUTOR · repo: xarray (13221727) · type: issue · reactions: none

Code Sample

```python
import xarray as xr
from dask.array import ones
import dask
from dask.diagnostics import ProgressBar

ProgressBar().register()

# Define a mock Dataset
dset = {}
for i in range(5):
    name = 'var' + str(i)
    data = i * ones((8, 79, 200, 401), dtype='f4', chunks=(1, 1, 200, 401))
    var = xr.DataArray(data=data, dims=('time', 'level', 'lat', 'lon'), name=name)
    dset[name] = var
dset = xr.Dataset(dset)

# Single thread to facilitate debugging.
# (may require dask < 0.18)
with dask.set_options(get=dask.get):
    # This works fine.
    print("Testing immediate netCDF4 writing")
    dset.to_netcdf("test1.nc")
    # This can be twice as slow as the version above.
    # Can be even slower (like 10x slower) on a shared filesystem.
    print("Testing delayed netCDF4 writing")
    dset.to_netcdf("test2.nc", compute=False).compute()
```

Problem description

Using the delayed version of [...]. Is there a reason for the repeated open/close cycles (e.g. #1198?), or can this behaviour be fixed so the file stays open for the duration of the [...]?
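The slowdown the issue describes comes down to repeated open/close cycles on the target file. A minimal pure-Python sketch of the two access patterns being compared (plain binary files and hypothetical helper names, nothing xarray-specific):

```python
import os
import tempfile

# Illustrative only (not xarray's actual code): writing many chunks while
# reopening the file each time pays the open/close overhead per chunk,
# whereas a single handle pays it once for the whole write.

def write_reopening(path, chunks):
    # One open/close cycle per chunk.
    for chunk in chunks:
        with open(path, "ab") as f:
            f.write(chunk)

def write_keeping_open(path, chunks):
    # One open/close cycle for the entire write.
    with open(path, "ab") as f:
        for chunk in chunks:
            f.write(chunk)

chunks = [b"x" * 1024 for _ in range(100)]
with tempfile.TemporaryDirectory() as d:
    p1 = os.path.join(d, "reopen.bin")
    p2 = os.path.join(d, "keep.bin")
    write_reopening(p1, chunks)
    write_keeping_open(p2, chunks)
    # Both produce identical output; only the open/close count differs.
    assert os.path.getsize(p1) == os.path.getsize(p2) == 100 * 1024
```

On a local filesystem the per-open cost is small; on a shared filesystem each open can involve network round-trips, which is consistent with the "10x slower" observation in the report.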
**PR #2257: Write inconsistent chunks to netcdf** (id 336729475, node_id MDExOlB1bGxSZXF1ZXN0MTk4MTEzNTQ2)

- user: neishm (1554921) · state: closed · locked: 0 · comments: 2 · draft: 0
- created_at: 2018-06-28T18:23:55Z · updated_at: 2018-06-29T13:52:15Z · closed_at: 2018-06-29T05:07:27Z
- author_association: CONTRIBUTOR · pull_request: pydata/xarray/pulls/2257 · repo: xarray (13221727) · type: pull · reactions: none
**Issue #2254: Writing Datasets to netCDF4 with "inconsistent" chunks** (id 336273865, node_id MDU6SXNzdWUzMzYyNzM4NjU=)

- user: neishm (1554921) · state: closed (completed) · locked: 0 · comments: 3
- created_at: 2018-06-27T15:15:02Z · updated_at: 2018-06-29T05:07:27Z · closed_at: 2018-06-29T05:07:27Z
- author_association: CONTRIBUTOR · repo: xarray (13221727) · type: issue · reactions: none

Code Sample

```python
import xarray as xr
from dask.array import zeros, ones

# Construct two variables with the same dimensions, but different chunking
x = zeros((100, 100), dtype='f4', chunks=(50, 100))
x = xr.DataArray(data=x, dims=('lat', 'lon'), name='x')
y = ones((100, 100), dtype='f4', chunks=(100, 50))
y = xr.DataArray(data=y, dims=('lat', 'lon'), name='y')

# Put them both into the same dataset
dset = xr.Dataset({'x': x, 'y': y})

# Save to a netCDF4 file.
dset.to_netcdf("test.nc")
```

The last line results in [...].

Problem description

This error is triggered by [...]. I'm assuming [...]. If I define a more general check [...].

Is this change as straightforward as I think, or is there something intrinsic about [...]?
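The "more general check" the report alludes to can be sketched in plain Python (hypothetical `chunks_ok_for_netcdf` helper, not xarray's actual code). dask records chunk sizes per axis as a tuple, e.g. `(50, 50)` for a length-100 axis split in two, and netCDF4/HDF5 only requires each variable's own chunking to be regular (all chunks equal, except possibly a shorter final one), so two variables may legitimately split a shared dimension differently:

```python
def chunks_ok_for_netcdf(chunks):
    """True if every axis is uniformly chunked, allowing a shorter final chunk.

    `chunks` is a dask-style tuple of per-axis chunk-size tuples.
    """
    for axis_chunks in chunks:
        first = axis_chunks[0]
        # All interior chunks must match the first chunk size...
        if any(c != first for c in axis_chunks[:-1]):
            return False
        # ...and the final chunk may only be equal or shorter.
        if axis_chunks[-1] > first:
            return False
    return True

# 'x' chunked (50, 100) and 'y' chunked (100, 50) are each regular on their
# own, even though they disagree on how the shared lat/lon axes are split.
assert chunks_ok_for_netcdf(((50, 50), (100,)))   # x: chunks=(50, 100)
assert chunks_ok_for_netcdf(((100,), (50, 50)))   # y: chunks=(100, 50)
assert not chunks_ok_for_netcdf(((30, 50, 20),))  # irregular interior chunk
```

Under this per-variable rule the dataset in the code sample above would be writable, which is the behaviour the follow-up PR #2257 moves toward.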
**Issue #1763: Multi-dimensional coordinate mixup when writing to netCDF** (id 279832457, node_id MDU6SXNzdWUyNzk4MzI0NTc=)

- user: neishm (1554921) · state: closed (completed) · locked: 0 · comments: 4
- created_at: 2017-12-06T17:05:36Z · updated_at: 2018-01-11T16:54:48Z · closed_at: 2018-01-11T16:54:48Z
- author_association: CONTRIBUTOR · repo: xarray (13221727) · type: issue · reactions: none

Problem description

Under certain conditions, the netCDF files produced by [...].

Test Dataset

Some sample code to generate a problematic Dataset:

```python
import xarray as xr
import numpy as np

zeros1 = np.zeros((5, 3))
zeros2 = np.zeros((6, 3))
zeros3 = np.zeros((5, 4))
d = xr.Dataset({
    'lon1': (['x1', 'y1'], zeros1, {}),
    'lon2': (['x2', 'y1'], zeros2, {}),
    'lon3': (['x1', 'y2'], zeros3, {}),
    'lat1': (['x1', 'y1'], zeros1, {}),
    'lat2': (['x2', 'y1'], zeros2, {}),
    'lat3': (['x1', 'y2'], zeros3, {}),
    'foo1': (['x1', 'y1'], zeros1, {'coordinates': 'lon1 lat1'}),
    'foo2': (['x2', 'y1'], zeros2, {'coordinates': 'lon2 lat2'}),
    'foo3': (['x1', 'y2'], zeros3, {'coordinates': 'lon3 lat3'}),
})
d = xr.conventions.decode_cf(d)
```

```
<xarray.Dataset>
Dimensions:  (x1: 5, x2: 6, y1: 3, y2: 4)
Coordinates:
    lat1     (x1, y1) float64 ...
    lat3     (x1, y2) float64 ...
    lat2     (x2, y1) float64 ...
    lon1     (x1, y1) float64 ...
    lon3     (x1, y2) float64 ...
    lon2     (x2, y1) float64 ...
Dimensions without coordinates: x1, x2, y1, y2
Data variables:
    foo1     (x1, y1) float64 ...
    foo2     (x2, y1) float64 ...
    foo3     (x1, y2) float64 ...

<xarray.DataArray 'foo1' (x1: 5, y1: 3)>
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
Coordinates:
    lat1     (x1, y1) float64 ...
    lon1     (x1, y1) float64 ...
Dimensions without coordinates: x1, y1

<xarray.DataArray 'foo2' (x2: 6, y1: 3)>
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
Coordinates:
    lat2     (x2, y1) float64 ...
    lon2     (x2, y1) float64 ...
Dimensions without coordinates: x2, y1

<xarray.DataArray 'foo3' (x1: 5, y2: 4)>
array([[ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.]])
Coordinates:
    lat3     (x1, y2) float64 ...
    lon3     (x1, y2) float64 ...
Dimensions without coordinates: x1, y2
```

The problem

The problem happens when I try to write this to netCDF (using either the netCDF4 or scipy engines):

```
[...]
// global attributes:
                :_NCProperties = "version=1|netcdflibversion=4.4.1.1|hdf5libversion=1.8.18" ;
}
```

Here, foo1, foo2, and foo3 have extra coordinates associated with them. Interestingly, if I re-open this netCDF file with [...].

Expected Output

I would expect the netCDF file to have a single pair of lat/lon for each variable: [...]
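The expected output implies a simple selection rule: a multidimensional coordinate belongs in a variable's `coordinates` attribute only when all of the coordinate's dimensions are also dimensions of that variable. A standalone sketch of that rule (hypothetical `coords_for` helper operating on plain dimension tuples, not xarray's actual code):

```python
def coords_for(var_dims, coord_dims):
    """Names of coordinates whose dimensions are a subset of var_dims.

    var_dims: tuple of the variable's dimension names.
    coord_dims: mapping of coordinate name -> its dimension names.
    """
    return sorted(
        name for name, dims in coord_dims.items()
        if set(dims) <= set(var_dims)
    )

# Dimension layout from the test Dataset above.
coord_dims = {
    'lon1': ('x1', 'y1'), 'lat1': ('x1', 'y1'),
    'lon2': ('x2', 'y1'), 'lat2': ('x2', 'y1'),
    'lon3': ('x1', 'y2'), 'lat3': ('x1', 'y2'),
}

# Each foo variable picks up exactly its own lat/lon pair and nothing else.
assert coords_for(('x1', 'y1'), coord_dims) == ['lat1', 'lon1']
assert coords_for(('x2', 'y1'), coord_dims) == ['lat2', 'lon2']
assert coords_for(('x1', 'y2'), coord_dims) == ['lat3', 'lon3']
```

This matches the "single pair of lat/lon per variable" expectation, and it is the direction the follow-up PR #1768 takes.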
**PR #1768: Fix multidimensional coordinates** (id 280274296, node_id MDExOlB1bGxSZXF1ZXN0MTU3MDk4NTY0)

- user: neishm (1554921) · state: closed · locked: 0 · comments: 2 · draft: 0
- created_at: 2017-12-07T20:50:33Z · updated_at: 2018-01-11T16:54:48Z · closed_at: 2018-01-11T16:54:48Z
- author_association: CONTRIBUTOR · pull_request: pydata/xarray/pulls/1768 · repo: xarray (13221727) · type: pull · reactions: none
Table schema:

```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
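The page's row selection can be reproduced against this schema with the standard-library `sqlite3` module. A minimal sketch using a trimmed-down copy of the table (most columns and the foreign keys omitted) and two sample rows taken from the data above; since the timestamps are ISO 8601 strings, a lexicographic `ORDER BY` matches chronological order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY,
        number INTEGER,
        title TEXT,
        user INTEGER,
        updated_at TEXT
    )
""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?)",
    [
        (334633212, 2242, "to_netcdf(compute=False) can be slow",
         1554921, "2019-01-13T21:13:28Z"),
        (336729475, 2257, "Write inconsistent chunks to netcdf",
         1554921, "2018-06-29T13:52:15Z"),
    ],
)

# "rows where user = 1554921 sorted by updated_at descending"
ordered = [number for (number,) in conn.execute(
    "SELECT number FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (1554921,),
)]
print(ordered)  # most recently updated first: [2242, 2257]
```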