issues
3 rows where user = 2853966 sorted by updated_at descending
**Issue 7517 — Getting information on netcdf file with unlimited dimensions**

| id | node_id | user | state | locked | comments | created_at | updated_at | author_association | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|
| 1577957904 | I_kwDOAMm_X85eDboQ | oliviermarti 2853966 | open | 0 | 3 | 2023-02-09T14:08:44Z | 2023-04-29T03:40:24Z | NONE | xarray 13221727 | issue |

What is your issue?

When one reads a netCDF file, there is no way to determine whether it has an unlimited dimension, nor which one it is. I really need to be able to handle that: I need to process a variable and write the result with exactly the same information about dimensions and coordinates, with all attributes and characteristics.

Thanks, Olivier

```python
import os

import xarray as xr

print('==== Get an example file')
file = 'tas_Amon_IPSL-CM6A-LR_piControl_r1i1p1f1_gr_185001-234912.nc'
h_file = f"https://vesg.ipsl.upmc.fr/thredds/fileServer/cmip6/CMIP/IPSL/IPSL-CM6A-LR/piControl/r1i1p1f1/Amon/tas/gr/v20200326/{file}"

print('\n==== Getting file')
os.system(f"wget --no-clobber {h_file}")

print('\n==== File header : this file has an unlimited dimension "time"')
os.system(f"ncdump -h {file} | head")

dd = xr.open_dataset(file, decode_times=True, use_cftime=True)
xr.set_options(display_expand_attrs=True)

print('\n==== General information : no information about the unlimited dimension(s)')
print(dd)

print('\n==== Dimensions : no information about the unlimited dimension(s)')
print(dd.dims)

print('\n==== Attributes : no information about the unlimited dimension(s)')
for attr in dd.attrs:
    print(f'{attr} : {dd.attrs[attr]}')
```

Reactions:

```json
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/7517/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
```
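As a side note on the issue above: xarray does record unlimited dimensions in the dataset's `encoding` dict when a netCDF file is opened, and `Dataset.to_netcdf` accepts an `unlimited_dims` argument for writing them back. The following is a minimal sketch (not part of the issue) that builds a small local file instead of downloading the CMIP6 example:

```python
import os
import tempfile

import numpy as np
import xarray as xr

# Build a small dataset with a "time" dimension
ds = xr.Dataset(
    {"tas": (("time",), np.arange(4.0))},
    coords={"time": np.arange(4)},
)

# Write it out, marking "time" as unlimited
path = os.path.join(tempfile.mkdtemp(), "sample_unlimited.nc")
ds.to_netcdf(path, unlimited_dims=["time"])

# On re-opening, the backend records the file's unlimited dimensions
# in the dataset-level encoding (not in the repr, which is the gap
# the issue points at)
with xr.open_dataset(path) as dd:
    print(dd.encoding.get("unlimited_dims"))  # {'time'}
```

Round-tripping a file therefore requires carrying `dd.encoding["unlimited_dims"]` over to the `to_netcdf` call by hand.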
**Issue 6087 — Computing 'seasonal means' spanning 4 months (with resample or groupby)**

| id | node_id | user | state | locked | comments | created_at | updated_at | closed_at | author_association | state_reason | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1084854762 | I_kwDOAMm_X85AqZHq | oliviermarti 2853966 | closed | 1 | 0 | 2021-12-20T14:27:20Z | 2022-01-21T05:54:39Z | 2022-01-21T05:54:39Z | NONE | completed | xarray 13221727 | issue |

Climatologists often use 'seasonal means', i.e. means over 3 months. The usual periods are DJF (December-January-February), MAM, JJA and SON. groupby and resample are convenient for computing these seasonal means; see for example:

https://xarray.pydata.org/en/stable/examples/monthly-means.html
https://stackoverflow.com/questions/59234745/is-there-any-easy-way-to-compute-seasonal-mean-with-xarray

But some studies need means over 4 months: DJFM, MAMJ, JJAS and SOND. Would it be feasible for groupby and resample to recognize these 4-month periods? For resample, 3-month means are specified with a syntax like resample(time='QS-DEC'). Resampling over 4 months is trickier: it is not a true resampling, since some months are repeated, yet we still need 4 seasonal values per year.

Thanks, Olivier

Reactions:

```json
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/6087/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
```
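Because the 4-month seasons overlap (March belongs to both DJFM and MAMJ), a single resample cannot produce them, as the issue notes. One workaround, sketched below with a hypothetical `seasons` mapping (not from the issue), is to select each season's months with `isin` and average them separately:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Overlapping 4-month seasons: some months appear in two seasons,
# so each mean is a separate boolean selection, not one resample
seasons = {
    "DJFM": [12, 1, 2, 3],
    "MAMJ": [3, 4, 5, 6],
    "JJAS": [6, 7, 8, 9],
    "SOND": [9, 10, 11, 12],
}

# Two years of monthly data for illustration
time = pd.date_range("2000-01-01", periods=24, freq="MS")
da = xr.DataArray(np.arange(24.0), coords={"time": time}, dims="time")

# Mean over all occurrences of each season's months
means = {
    name: float(da.sel(time=da["time.month"].isin(months)).mean("time"))
    for name, months in seasons.items()
}
print(means)
```

This gives one value per season over the whole record; a per-year version would additionally group the selected months by a season-aware year label (e.g. shifting December into the following year for DJFM).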
**Issue 5208 — DataArray attributes not present in DataSet. Coherency problem between DataSet and NetCDF file**

| id | node_id | user | state | locked | comments | created_at | updated_at | author_association | repo | type |
|---|---|---|---|---|---|---|---|---|---|---|
| 865003095 | MDU6SXNzdWU4NjUwMDMwOTU= | oliviermarti 2853966 | open | 0 | 4 | 2021-04-22T14:14:15Z | 2021-04-29T22:33:05Z | NONE | xarray 13221727 | issue |

When I create a DataSet from DataArrays, attributes are lost. When I create attributes in a DataSet, they are not shown when the DataSet is printed. Below is Python code showing the xarray behaviour in detail.

My requests:

* When creating a DataSet from DataArrays, the DataArrays' attributes should be incorporated in the DataSet (maybe optionally).
* Attributes present in a DataSet should appear when the DataSet is printed.

Thanks, Olivier

```python
#!/usr/bin/env python
# coding: utf-8

import numpy as np
import xarray as xr

# Creates DataArrays
nt = 4
time = np.arange(nt) * 86400.0
time = xr.DataArray(time, coords=[time,], dims=["time",])
aa = time * 2.0

# Adding attributes to DataArrays
time.attrs['units'] = "second"
aa.attrs['units'] = "whatever"

# Attributes are visible in the DataArrays
print('----------> time DataArray: ')
print(time)
print('----------> aa DataArray : ')
print(aa)
print('----------> aa attributes : ')
print(aa.attrs)

# Creating a Dataset
ds = xr.Dataset(
    {"aa": (["time",], aa), },
    coords={"time": (["time",], time), },
)

# Attributes are not visible in the Dataset
print('----------> DataSet before setting attributes')
print(ds)

# My request #1 : attributes of the DataArrays should be added
# to the DataSet (may be optional)
print('----------> Attributes of aa in DataSet : none')
print(ds['aa'].attrs)
print('----------> Attributes of aa outside DataSet : still here')
print(aa.attrs)
print('----------> Attributes are not written to the NetCDF file')
ds.to_netcdf('sample1.nc')

# Adding attributes directly to the Dataset
ds['time'].attrs['units'] = "second"
ds['aa'].attrs['units'] = "whatever"

# Attributes are still not visible in the Dataset
print('----------> DataSet after setting attributes : attributes not shown')
print(ds)

# My request #2 : attributes added to the DataSet should be printed
print('----------> But they are written in the NetCDF file')
ds.to_netcdf('sample2.nc')

# My request : coherency between the DataSet and the NetCDF file.
# What if I read a NetCDF file?
dt = xr.open_dataset('sample2.nc')
print('----------> DataSet read in a NetCDF file : Attributes are not shown')
print(dt)
print('----------> Attributes of aa in DataSet : present')
print(dt['aa'].attrs)
```

Reactions:

```json
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/5208/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
```
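A note on the attribute-loss part of the issue above: the code in the issue passes the DataArrays into the constructor wrapped in `(dims, data)` tuples, which rebuilds the variables and drops their attrs. Passing the DataArrays themselves preserves them. A minimal sketch (not from the issue) of that variant:

```python
import numpy as np
import xarray as xr

nt = 4
time = xr.DataArray(np.arange(nt) * 86400.0, dims="time")
time.attrs["units"] = "second"

# Arithmetic drops attrs by default (xr.set_options(keep_attrs=True)
# changes that globally), so re-set them here
aa = time * 2.0
aa.attrs["units"] = "whatever"

# Passing the DataArrays directly, rather than (dims, data) tuples,
# keeps their attributes in the resulting Dataset
ds = xr.Dataset({"aa": aa}, coords={"time": time})
print(ds["aa"].attrs)    # {'units': 'whatever'}
print(ds["time"].attrs)  # {'units': 'second'}
```

This does not change the repr behaviour the issue complains about: per-variable attrs still only show when the variable itself is printed, not in the Dataset summary.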
Table schema:

```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```