issues
6 rows where user = 12339722 sorted by updated_at descending
Columns: id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at (sorted ▲) | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type
232743076 | MDU6SXNzdWUyMzI3NDMwNzY= | 1437 | How can I drop attribute of DataArray | wqshen 12339722 | closed | locked: 0 | comments: 3 | created 2017-06-01T01:42:59Z | updated 2022-04-06T14:21:15Z | closed 2017-06-01T08:40:03Z | NONE

body: When I use `Dataset.to_netcdf`, it raises an error (traceback elided in this export). How can I drop the `coordinates` attribute of `omega`?

reactions: https://api.github.com/repos/pydata/xarray/issues/1437/reactions (total_count 0, all reaction counts 0)

state_reason: completed | repo: xarray 13221727 | type: issue
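The usual fix for this class of `to_netcdf` error is to remove the conflicting attribute before writing, since `.attrs` is a plain dict. A minimal sketch, assuming a variable named `omega` that carries a `coordinates` attribute (the dataset here is made up for illustration):

```python
import numpy as np
import xarray as xr

# Build a small dataset whose variable carries a 'coordinates' attribute,
# mimicking the situation described in the issue.
ds = xr.Dataset({"omega": ("x", np.arange(3.0))})
ds["omega"].attrs["coordinates"] = "lon lat"

# Dropping the attribute is a plain dict operation on .attrs;
# pop(..., None) is a no-op if the attribute is absent.
ds["omega"].attrs.pop("coordinates", None)

print("coordinates" in ds["omega"].attrs)  # False — safe to call ds.to_netcdf(...) now
```

After the `pop`, writing with `ds.to_netcdf(...)` should no longer trip over the attribute.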
833518574 | MDU6SXNzdWU4MzM1MTg1NzQ= | 5043 | open_mfdataset failed to open tarfile filestream when it locates in the context of dask.distributed Client | wqshen 12339722 | open | locked: 0 | comments: 1 | created 2021-03-17T08:17:47Z | updated 2021-06-20T21:06:23Z | NONE

body: Recently, I tried to open a tarfile filestream with `open_mfdataset` in the context of a `dask.distributed` `Client`, and it failed. My code is like the following:

```python
import tarfile
import xarray as xr
from dask.distributed import Client

client = Client()
tar = tarfile.open(my_multiple_netcdf_tar_gz_file)
flist = [tar.extractfile(member) for member in tar.getmembers()]
ds = xr.open_mfdataset(flist)
print(ds.MyNcVar.values)  # this line will raise the exception
# ...blah blah, my other client calculation codes...
client.close()
```

In the above code, the elements of the variable `flist` are file-like objects extracted from the tar archive. The reason can be seen in this line of the xarray source:

```python
# Note: this line forces chunks=None into chunks={}, which pulls dask in
open_kwargs = dict(engine=engine, chunks=chunks or {}, **kwargs)
```

Even if I set `chunks=None`, it is still an error, because `chunks or {}` turns it back into `{}`. I think maybe we can keep the chunks value as passed, and anyone who wants a different value can set it explicitly.

Or maybe you have a better solution for my problem? Also, thank you for your great work on this excellent package.

reactions: https://api.github.com/repos/pydata/xarray/issues/5043/reactions (total_count 0, all reaction counts 0)

repo: xarray 13221727 | type: issue
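The mechanics of this report can be reproduced with the standard library alone: `tarfile.extractfile` returns file-like objects, which is what ends up in `flist` above. A minimal stdlib sketch (the member names and payloads are made up; real netCDF contents and the dask client are left out):

```python
import io
import tarfile

# Build a small .tar.gz in memory with two fake "netCDF" members.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name, payload in [("a.nc", b"data-a"), ("b.nc", b"data-b")]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Re-open and extract the members as file-like objects, as in the issue;
# these are the objects the reporter hands to xr.open_mfdataset(flist).
tar = tarfile.open(fileobj=buf, mode="r:gz")
flist = [tar.extractfile(member) for member in tar.getmembers()]
contents = [f.read() for f in flist]
print(contents)  # [b'data-a', b'data-b']
```

Whether such streams survive being shipped to dask workers is exactly what the issue is about; extracting members to local temporary files first is a common workaround.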
282746723 | MDU6SXNzdWUyODI3NDY3MjM= | 1788 | Any idea to speed up the open_mfdataset for reading many many big netCDF files? | wqshen 12339722 | closed | locked: 0 | comments: 3 | created 2017-12-18T02:13:49Z | updated 2018-05-18T15:03:19Z | closed 2018-05-18T15:03:18Z | NONE

body: I have several wrfout files from 20-year climate simulations. When I use `open_mfdataset` to read them, it takes 10 to 20 minutes to finish on my server. Is there a way to speed up this process? Multiprocessing?

reactions: https://api.github.com/repos/pydata/xarray/issues/1788/reactions (total_count 0, all reaction counts 0)

state_reason: completed | repo: xarray 13221727 | type: issue
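For reference, `open_mfdataset` later grew a `parallel=True` flag that opens and decodes files concurrently via dask, and limiting how much xarray compares across files also helps. A hedged sketch (tiny files are generated on the fly here to stand in for WRF output; `T2` and the paths are made-up names):

```python
import os
import tempfile
import numpy as np
import xarray as xr

# Write two tiny single-timestep files to stand in for many wrfout files.
tmpdir = tempfile.mkdtemp()
paths = []
for t in range(2):
    ds = xr.Dataset({"T2": ("time", np.array([280.0 + t]))},
                    coords={"time": [t]})
    path = os.path.join(tmpdir, f"wrfout_{t}.nc")
    ds.to_netcdf(path)
    paths.append(path)

# Options that commonly speed up multi-file opens:
#   combine="by_coords"  — order files by their coordinates
#   data_vars="minimal", coords="minimal", compat="override"
#                        — avoid eagerly comparing every variable across files
#   parallel=True        — decode files concurrently (needs dask.distributed;
#                          left off in this sketch)
merged = xr.open_mfdataset(paths, combine="by_coords",
                           data_vars="minimal", coords="minimal",
                           compat="override")
print(merged.sizes["time"])
```

With thousands of large files, a `preprocess=` function that drops unneeded variables early often saves the most time.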
177754433 | MDU6SXNzdWUxNzc3NTQ0MzM= | 1008 | How to avoid the auto convert variable dtype from float32 to float64 when read netCDF file use open_dataset? | wqshen 12339722 | closed | locked: 0 | comments: 6 | created 2016-09-19T10:51:29Z | updated 2018-03-28T22:37:00Z | closed 2018-03-28T22:37:00Z | NONE

body: Using `xarray.open_dataset` yields the output below (omitted in this export). The dtypes of the variables dbz, vr and sw in this file have been converted to float64, while they are actually float32. Using `netCDF4.Dataset` yields the output below (omitted in this export). netCDF4.Dataset reports the right variable dtype, while xarray.open_dataset does not.

reactions: https://api.github.com/repos/pydata/xarray/issues/1008/reactions (total_count 0, all reaction counts 0)

state_reason: completed | repo: xarray 13221727 | type: issue
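The promotion comes from xarray's CF decoding: a variable carrying `_FillValue` (or `scale_factor`/`add_offset`) attributes gets masked with NaN, which can widen the dtype. The usual workaround is `mask_and_scale=False` (accepted by `open_dataset` as well). A small sketch, with the variable name `dbz` taken from the issue and the values made up:

```python
import numpy as np
import xarray as xr

# A float32 variable with a _FillValue attribute, as a radar file might have.
raw = xr.Dataset({"dbz": ("x", np.array([1.0, -9999.0], dtype=np.float32),
                          {"_FillValue": np.float32(-9999.0)})})

# With mask_and_scale (the default), fill values become NaN, which may widen
# the dtype; with mask_and_scale=False the on-disk dtype is kept untouched.
decoded = xr.decode_cf(raw)
kept = xr.decode_cf(raw, mask_and_scale=False)

print(kept["dbz"].dtype)                          # float32 preserved
print(bool(np.isnan(decoded["dbz"].values[1])))   # fill value became NaN
```

The issue was closed in 2018, and recent xarray versions try to keep the smallest safe float dtype when masking, so the float64 promotion described above may no longer reproduce as-is.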
248942085 | MDU6SXNzdWUyNDg5NDIwODU= | 1505 | A problem about xarray.concat | wqshen 12339722 | closed | locked: 0 | comments: 2 | created 2017-08-09T07:33:50Z | updated 2017-08-10T00:25:07Z | closed 2017-08-10T00:25:07Z | NONE

body: Hi, today I used `xr.concat` to concatenate my split WRF model netCDF datasets. This model output typically has two kinds of mesh grid, a staggered grid and a mass grid, and different variables live on different grids. For example, among the variables in one dataset, some have one dimension and some another (the dimension names are elided in this export). Is it possible to concat along multiple independent dimensions, like the following (example elided)?

reactions: https://api.github.com/repos/pydata/xarray/issues/1505/reactions (total_count 0, all reaction counts 0)

state_reason: completed | repo: xarray 13221727 | type: issue
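`xr.concat` accepts only a single `dim`, so the usual pattern for mixed grids is to split the variables by grid, concatenate each group along its own dimension, and merge the results. A small sketch with made-up mass (`x`) and staggered (`x_stag`) dimensions and variable names:

```python
import numpy as np
import xarray as xr

def make_part(offset):
    # One "file" holding a mass-grid variable and a staggered-grid variable.
    return xr.Dataset({
        "t": ("x", np.arange(2.0) + offset),       # mass grid
        "u": ("x_stag", np.arange(3.0) + offset),  # staggered grid
    })

parts = [make_part(0), make_part(10)]

# Concatenate each grid's variables along its own dimension, then merge.
mass = xr.concat([p[["t"]] for p in parts], dim="x")
stag = xr.concat([p[["u"]] for p in parts], dim="x_stag")
combined = xr.merge([mass, stag])

print(dict(combined.sizes))
```

Selecting with `p[["t"]]` keeps a Dataset (not a DataArray), so each group concatenates cleanly before the final merge.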
189094076 | MDU6SXNzdWUxODkwOTQwNzY= | 1117 | Is it possible to modify a variable value in netCDF file? | wqshen 12339722 | closed | locked: 0 | comments: 1 | created 2016-11-14T11:53:30Z | updated 2016-11-16T16:50:50Z | closed 2016-11-16T16:50:50Z | NONE

body: With the netCDF4 package, one can modify a netCDF file variable as follows (code elided in this export). Is this possible in xarray?

reactions: https://api.github.com/repos/pydata/xarray/issues/1117/reactions (total_count 0, all reaction counts 0)

state_reason: completed | repo: xarray 13221727 | type: issue
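xarray follows a read-modify-write model rather than in-place file edits: load the dataset, assign new values in memory, and write a new file (for true in-place edits, netCDF4's `variable[:] = ...` remains the tool). A sketch of the xarray side, using an in-memory dataset in place of `xr.open_dataset("file.nc")`; the variable name is made up:

```python
import numpy as np
import xarray as xr

# Stand-in for ds = xr.open_dataset("file.nc").
ds = xr.Dataset({"temperature": ("x", np.array([270.0, 280.0, 290.0]))})

# Assign through .values (or label-based .loc); this edits the in-memory
# copy only — nothing touches a file until to_netcdf is called.
ds["temperature"].values[1] = 999.0
# ds.to_netcdf("file_modified.nc")  # write the result out as a new file

print(float(ds["temperature"][1]))  # 999.0
```

When the source was opened lazily from disk, calling `ds.load()` before assigning avoids surprises from lazy indexing, and the output path must differ from the input path.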
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```