
issues


6 rows where user = 12339722 sorted by updated_at descending


Facets: state closed 5, open 1 · type issue 6 · repo xarray 6
Issue #1437: How can I drop an attribute of a DataArray?
id 232743076 · wqshen (12339722) · closed (completed) · 3 comments · created 2017-06-01T01:42:59Z · updated 2022-04-06T14:21:15Z · closed 2017-06-01T08:40:03Z · repo xarray

When I use Dataset.to_netcdf, it raises:

```
ValueError: cannot serialize coordinates because variable omega already has an attribute 'coordinates'

<xarray.DataArray 'omega' (south_north: 252, west_east: 384)>
Coordinates:
    XLONG        (south_north, west_east) float32 75.7405 75.9333 76.1263 ...
    XLAT         (south_north, west_east) float32 -1.63264 -1.56052 -1.48872 ...
  * south_north  (south_north) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
  * west_east    (west_east) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ...
    XTIME        float32 16560.0
    Time         datetime64[ns] 1988-01-12T12:00:00
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: omega
    units: Pa s-1
    stagger:
    coordinates: XLONG XLAT XTIME
    projection: LambertConformal(stand_lon=116.0, moad_cen_lat=34.9999961853, truelat1=10.0, truelat2=50.0, pole_lat=90.0, pole_lon=0.0)
    vert_interp_type: p
```

How can I drop the coordinates attribute of omega?
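For reference, attrs on xarray objects behaves like a plain dict, so the conflicting attribute can be deleted before serializing. A minimal sketch, with the input and output filenames as placeholders:

```python
import xarray as xr

ds = xr.open_dataset("my_wrf_file.nc")  # placeholder for the dataset in question

# attrs is a plain dict, so the conflicting key can simply be removed
ds["omega"].attrs.pop("coordinates", None)

ds.to_netcdf("out.nc")  # serializes without the attribute clash
```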

Issue #5043: open_mfdataset fails to open a tarfile file stream inside a dask.distributed Client context
id 833518574 · wqshen (12339722) · open · 1 comment · created 2021-03-17T08:17:47Z · updated 2021-06-20T21:06:23Z · repo xarray

Recently I used open_mfdataset to open a local tar.gz archive of multiple netCDF files. It failed to open it, raising distributed.scheduler.KilledWorker and TypeError: cannot serialize 'ExFileObject' object.

My code looks like the following:

```python
import tarfile

import xarray as xr
from dask.distributed import Client

client = Client()

tar = tarfile.open(my_multiple_netcdf_tar_gz_file)
flist = [tar.extractfile(member) for member in tar.getmembers()]

ds = xr.open_mfdataset(flist)

# This line will raise the exception
print(ds.MyNcVar.values)

# ... my other client calculation code ...

client.close()
```

In the code above, the elements of flist are of type ExFileObject, which cannot be serialized to the distributed.Client cluster and therefore causes open_mfdataset to fail.

The reason is that xr.open_mfdataset automatically converts chunks=None to {}, which forces xr.open_dataset to use dask.

We can see this in the following lines of open_mfdataset:

https://github.com/pydata/xarray/blob/37fe5441c8a2fb981f2c50b8379d7d4f8492ae19/xarray/backends/api.py#L897

```python
# Note: this line forces chunks=None into chunks={} and results in the involvement of dask
open_kwargs = dict(engine=engine, chunks=chunks or {}, **kwargs)

if parallel:
    import dask

    # wrap the open_dataset, getattr, and preprocess with delayed
    open_ = dask.delayed(open_dataset)
    getattr_ = dask.delayed(getattr)
    if preprocess is not None:
        preprocess = dask.delayed(preprocess)
else:
    open_ = open_dataset
    getattr_ = getattr

datasets = [open_(p, **open_kwargs) for p in paths]
closers = [getattr_(ds, "_close") for ds in datasets]
```

Even if I set chunks=None, the error still occurs, because chunks is never None by the time it is passed into open_dataset.

I think maybe we could keep the chunks value as passed; anyone who wants to change it can still set it to {} or any other value:

```python
open_kwargs = dict(engine=engine, chunks=chunks, **kwargs)
```

Or do you have a better solution for my problem?
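As a workaround sketch (not the library's fix): extracting the archive members to real files sidesteps the serialization problem, since plain paths, unlike ExFileObject handles, can be sent to dask workers. The archive name and glob pattern below are placeholders:

```python
import tarfile
import tempfile
from pathlib import Path

import xarray as xr

# Extract the archive so every member is an ordinary file on disk.
tmpdir = Path(tempfile.mkdtemp())
with tarfile.open("my_archive.tar.gz") as tar:  # placeholder archive name
    tar.extractall(tmpdir)

# Plain paths serialize cleanly to distributed workers.
paths = sorted(tmpdir.glob("*.nc"))
ds = xr.open_mfdataset(paths)
```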

Also, thank you for your great work on this excellent package.

Issue #1788: Any idea to speed up open_mfdataset for reading many big netCDF files?
id 282746723 · wqshen (12339722) · closed (completed) · 3 comments · created 2017-12-18T02:13:49Z · updated 2018-05-18T15:03:19Z · closed 2018-05-18T15:03:18Z · repo xarray

I have several WRFout files from 20-year climate simulations. When I use open_mfdataset to read them, it takes 10 to 20 minutes on my server.

Is there a way to speed up this process? Multiprocessing?
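For what it's worth, newer xarray versions provide a parallel flag on open_mfdataset that wraps the per-file open_dataset calls in dask.delayed so they run concurrently on a dask cluster. A minimal sketch, with the glob pattern as a placeholder:

```python
import xarray as xr
from dask.distributed import Client

client = Client()  # local cluster; the per-file opens run on its workers

# parallel=True turns each per-file open_dataset call into a dask.delayed task
ds = xr.open_mfdataset("wrfout_d01_*.nc", parallel=True)
```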

Issue #1008: How to avoid the automatic conversion of variable dtype from float32 to float64 when reading a netCDF file with open_dataset?
id 177754433 · wqshen (12339722) · closed (completed) · 6 comments · created 2016-09-19T10:51:29Z · updated 2018-03-28T22:37:00Z · closed 2018-03-28T22:37:00Z · repo xarray

When I read some netCDF4 files using xr.open_dataset, the method seems to automatically convert variables from float32 to float64. How can I avoid this?

Using xarray.open_dataset:

```python
import xarray as xr

xr.open_dataset('cat.20151003200633.nc')
```

This yields the following output:

```
<xarray.Dataset>
Dimensions:  (x: 461, y: 461, z: 9)
Coordinates:
  * x        (x) float32 -230.0 -229.0 -228.0 -227.0 -226.0 -225.0 -224.0 ...
  * y        (y) float32 -230.0 -229.0 -228.0 -227.0 -226.0 -225.0 -224.0 ...
  * z        (z) float32 0.4 1.4 2.3 3.2 4.3 5.8 9.7 14.5 19.3
Data variables:
    dbz      (z, y, x) float64 nan nan nan nan nan nan nan nan nan nan nan ...
    vr       (z, y, x) float64 nan nan nan nan nan nan nan nan nan nan nan ...
    sw       (z, y, x) float64 nan nan nan nan nan nan nan nan nan nan nan ...
```

The dtypes of dbz, vr, and sw in this file have been converted to float64, though they are actually float32.

Using netCDF4.Dataset:

```python
import netCDF4 as ncf

ncf.Dataset('cat.20151003200633.nc')
```

This yields the following output:

```
<type 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4 data model, file format HDF5):
    ..........
    dimensions(sizes): x(461), y(461), z(9)
    variables(dimensions): float32 x(x), float32 y(y), float32 z(z), float32 dbz(z,y,x), float32 vr(z,y,x), float32 sw(z,y,x)
    groups:
```

netCDF4.Dataset reports the correct variable dtypes, while xarray.open_dataset does not.
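For reference, the promotion usually comes from mask-and-scale decoding: when a variable has a _FillValue, xarray replaces fill values with NaN and, in older versions, upcast float32 to float64 while doing so. Turning that decoding off preserves the on-disk dtype; a minimal sketch:

```python
import xarray as xr

# mask_and_scale=False skips _FillValue / scale_factor decoding, so the
# on-disk float32 dtype is kept (missing values stay as raw fill values).
ds = xr.open_dataset('cat.20151003200633.nc', mask_and_scale=False)
print(ds['dbz'].dtype)  # float32
```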

Issue #1505: A problem with xarray.concat
id 248942085 · wqshen (12339722) · closed (completed) · 2 comments · created 2017-08-09T07:33:50Z · updated 2017-08-10T00:25:07Z · closed 2017-08-10T00:25:07Z · repo xarray

Hi, today I used xr.concat to concatenate my split WRF model netCDF datasets. This model output typically has two kinds of mesh grids, a staggered grid and a mass grid, and different variables are located on different grids.

For example, below are some variables from one dataset; some have the dimension west_east and others west_east_stag:

```
MAPFAC_M   (Time, south_north, west_east) float32 1.06697 ...
MAPFAC_U   (Time, south_north, west_east_stag) float32 1.06697 ...
MAPFAC_V   (Time, south_north_stag, west_east) float32 1.06698 ...
MAPFAC_MX  (Time, south_north, west_east) float32 1.06697 ...
MAPFAC_MY  (Time, south_north, west_east) float32 1.06697 ...
MAPFAC_UX  (Time, south_north, west_east_stag) float32 1.06697 ...
MAPFAC_UY  (Time, south_north, west_east_stag) float32 1.06697 ...
MAPFAC_VX  (Time, south_north_stag, west_east) float32 1.06698 ...
MF_VX_INV  (Time, south_north_stag, west_east) float32 0.937228 ...
MAPFAC_VY  (Time, south_north_stag, west_east) float32 1.06698 ...
```

I want to concatenate along both west_east and west_east_stag so that these pieces of the dataset can be joined completely. When I use xr.concat, the dim argument seems to accept only one dimension. Is it possible to concatenate along multiple independent dimensions, like the following?

```python
xr.concat([ds_col_0, ds_col_1, ds_col_2], dim=['west_east', 'west_east_stag'])
```
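Since xr.concat accepts a single dim, one workaround sketch is to split each piece by grid type, concatenate each group along its own dimension, and merge the results. The split_by_dim helper below is hypothetical, written against the variable listing above:

```python
import xarray as xr

def split_by_dim(ds, dim):
    """Hypothetical helper: keep only the data variables that use `dim`."""
    return ds[[name for name in ds.data_vars if dim in ds[name].dims]]

pieces = [ds_col_0, ds_col_1, ds_col_2]  # the column datasets from the question

# Concatenate each grid type along its own dimension, then recombine.
mass = xr.concat([split_by_dim(ds, "west_east") for ds in pieces], dim="west_east")
stag = xr.concat([split_by_dim(ds, "west_east_stag") for ds in pieces], dim="west_east_stag")
combined = xr.merge([mass, stag])
```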

Issue #1117: Is it possible to modify a variable value in a netCDF file?
id 189094076 · wqshen (12339722) · closed (completed) · 1 comment · created 2016-11-14T11:53:30Z · updated 2016-11-16T16:50:50Z · closed 2016-11-16T16:50:50Z · repo xarray

With the netCDF4 package, one can modify a variable in a netCDF file as follows:

```python
import netCDF4

dset = netCDF4.Dataset('test.nc', 'r+')
dset['varname'][:] = 0
dset.close()
```

Is it possible in xarray?
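A minimal sketch of the usual xarray pattern: xarray has no direct equivalent of r+ in-place editing, so you load the data, modify it in memory, and write it back out, here to a new placeholder path to avoid overwriting a file that is still open:

```python
import xarray as xr

# load_dataset reads the data into memory and closes the file handle
ds = xr.load_dataset('test.nc')

ds['varname'][:] = 0  # modify the values in memory

# write the modified dataset to a new file
ds.to_netcdf('test_modified.nc')
```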



CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);