issues
2 rows where type = "issue" and user = 33062222 sorted by updated_at descending
issue 3739 · ValueError when trying to encode time variable in a NetCDF file with CF conventions

id: 558293655 | node_id: MDU6SXNzdWU1NTgyOTM2NTU= | user: avatar101 (33062222) | state: closed | locked: 0 | comments: 7
created_at: 2020-01-31T18:22:36Z | updated_at: 2023-09-13T13:45:47Z | closed_at: 2023-09-13T13:45:46Z
author_association: NONE | state_reason: completed | repo: xarray (13221727) | type: issue

body:

```python
# Imports
import numpy as np
import xarray as xr
import pandas as pd
from glob import glob

# files to be concatenated (`path` and `yr` are defined earlier in the reporter's script)
files = sorted(glob(path + str(1988) + '/V250*'))

# corrected dates
dates = pd.date_range(start=str(yr), end=str(yr+1), freq='6H', closed='left')

ds_test = xr.open_mfdataset(files[:10], combine='nested', concat_dim='time', decode_cf=False)

# correcting time
ds_test.time.values = dates[:10]

# fixing encoding
ds_test.time.attrs['units'] = "Seconds since 1970-01-01 00:00:00"

# preview of the time variable
print(ds_test.time)

ds_test.to_netcdf(path + 'test.nc')
```

Expected Output: a correctly encoded time variable in the written file.

Problem Description: I'm trying to concatenate […]

```python
# More diagnostics on the encoding
print(ds_test.encoding)

# checking any existing time encoding
print(ds_test.time.encoding)

# another try at setting the time encoding
ds_test.time.encoding['units'] = "Seconds since 1970-01-01 00:00:00"

# writing the file gives the same ValueError as above
ds_test.to_netcdf(path + 'test.nc')
```

ncdump output of one of the files:

```
// global attributes:
        :Conventions = "CF" ;
        :constants_file_name = "P19880101_06" ;
        :institution = "IACETH" ;
        :lonmin = -180.f ;
        :lonmax = 179.5f ;
        :latmin = -90.f ;
        :latmax = 90.f ;
        :levmin = 250.f ;
        :levmax = 250.f ;
        :history = "Fri Sep 6 15:59:17 2019: ncatted -a units,time,o,c,hours since 1988-01-01 06:00:00 -a standard_name,time,o,c,time V250_19880101_06" ;
        :NCO = "4.7.2" ;
data:

 time = 6 ;
}
```

Output of xr.show_versions(): […]

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
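An editorial aside rather than part of the report: when the time coordinate holds real datetime64 values, xarray encodes it itself and reads the target on-disk units from the variable's encoding, so a units entry left in .attrs conflicts with that machinery. A minimal sketch of the commonly suggested pattern, with hypothetical data and file names standing in for the reporter's V250 files:

```python
import numpy as np
import pandas as pd
import xarray as xr

# hypothetical stand-in for the concatenated V250 data from the report
times = pd.date_range("1988-01-01", periods=10, freq="6H")
ds = xr.Dataset({"v": ("time", np.arange(10.0))}, coords={"time": times})

# request the on-disk units via encoding, not attrs;
# xarray converts the datetime64 values itself when writing
ds.time.encoding["units"] = "seconds since 1970-01-01 00:00:00"
ds.to_netcdf("test.nc")
```

Under this pattern the dates never need to be converted to seconds by hand; xarray performs the conversion as it writes the file.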
issue 2758 · Dataset.to_netcdf() results in unexpected encoding parameters for 'netCDF4' backend

id: 408426920 | node_id: MDU6SXNzdWU0MDg0MjY5MjA= | user: avatar101 (33062222) | state: closed | locked: 0 | comments: 2
created_at: 2019-02-09T12:25:54Z | updated_at: 2019-02-11T09:59:06Z | closed_at: 2019-02-11T09:59:06Z
author_association: NONE | state_reason: completed | repo: xarray (13221727) | type: issue

body:

```python
import pandas as pd
import xarray as xr
from datetime import datetime

ds_test2 = xr.open_dataset('test_file.nc')
```

ncdump output showing what the file looks like:

```
netcdf test_file {
dimensions:
        lon = 720 ;
        lev = 1 ;
        time = 27147 ;
variables:
        float lon(lon) ;
                lon:_FillValue = NaNf ;
        float lev(lev) ;
                lev:_FillValue = NaNf ;
                lev:long_name = "hybrid level at layer midpoints" ;
                lev:units = "level" ;
                lev:standard_name = "hybrid_sigma_pressure" ;
                lev:positive = "down" ;
                lev:formula = "hyam hybm (mlev=hyam+hybm*aps)" ;
                lev:formula_terms = "ap: hyam b: hybm ps: aps" ;
        int64 time(time) ;
                time:units = "hours since 2000-01-01 00:00:00" ;
                time:calendar = "proleptic_gregorian" ;
        float V(time, lev, lon) ;
                V:_FillValue = NaNf ;
                V:units = "m/s" ;
}
```

```python
# convert the time values to seconds since a standard epoch by hand
std_time = datetime(1970, 1, 1)
timedata = pd.to_datetime(ds_test2.time.values).to_pydatetime()
timedata_updated = [(t - std_time).total_seconds() for t in timedata]
ds_test2.time.values = timedata_updated
ds_test2.time.attrs['units'] = 'Seconds since 01-01-1970 00:00:00 UTC'

# saving file
ds_test2.to_netcdf('/scratch3/mali/data/test/test_V250hov_encoding4_v2.nc',
                   encoding={'V': {'_FillValue': -999.0},
                             'time': {'units': "seconds since 1970-01-01 00:00:00"}})
```

```
ValueError                                Traceback (most recent call last)
<ipython-input-26-04662c00dfc2> in <module>
      6 # saving file to netcdf for one combined hov dataset
      7 ds_test2.to_netcdf('/scratch3/mali/data/test/test_V250hov_encoding4_v2.nc',
----> 8                    encoding={'V':{'_FillValue': -999.0},'time':{'units': "seconds since 1970-01-01 00:00:00"}})

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute)
   1220                          engine=engine, encoding=encoding,
   1221                          unlimited_dims=unlimited_dims,
-> 1222                          compute=compute)
   1223
   1224     def to_zarr(self, store=None, mode='w-', synchronizer=None, group=None,

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile)
    716         # to be parallelized with dask
    717         dump_to_store(dataset, store, writer, encoding=encoding,
--> 718                       unlimited_dims=unlimited_dims)
    719         if autoclose:
    720             store.close()

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/backends/api.py in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
    759
    760     store.store(variables, attrs, check_encoding, writer,
--> 761                 unlimited_dims=unlimited_dims)
    762
    763

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/backends/common.py in store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
    264         self.set_dimensions(variables, unlimited_dims=unlimited_dims)
    265         self.set_variables(variables, check_encoding_set, writer,
--> 266                            unlimited_dims=unlimited_dims)
    267
    268     def set_attributes(self, attributes):

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/backends/common.py in set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
    302             check = vn in check_encoding_set
    303             target, source = self.prepare_variable(
--> 304                 name, v, check, unlimited_dims=unlimited_dims)
    305
    306             writer.add(source, target)

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in prepare_variable(self, name, variable, check_encoding, unlimited_dims)
    448         encoding = _extract_nc4_variable_encoding(
    449             variable, raise_on_invalid=check_encoding,
--> 450             unlimited_dims=unlimited_dims)
    451         if name in self.ds.variables:
    452             nc4_var = self.ds.variables[name]

/usr/local/anaconda3/envs/work_env/lib/python3.6/site-packages/xarray/backends/netCDF4_.py in _extract_nc4_variable_encoding(variable, raise_on_invalid, lsd_okay, h5py_okay, backend, unlimited_dims)
    223     if invalid:
    224         raise ValueError('unexpected encoding parameters for %r backend: '
--> 225                          ' %r' % (backend, invalid))
    226     else:
    227         for k in list(encoding):

ValueError: unexpected encoding parameters for 'netCDF4' backend:  ['units']
```

Problem description: I'm trying to change the time attributes because the workflow includes some scripts that are not in Python, and I would like the time to start from a specific year. I wrote the code above to calculate seconds from a specific standard time. Later on, I realised that I don't need to do that, since xarray takes care of it when saving the data with that encoding parameter. The strange thing is that writing the file with the approach above gives an error, yet when I just read in the same file and save it with the same encoding, without changing the time values manually, it works fine. Here's what I mean:

```python
ds_test3 = xr.open_dataset('test_file.nc')  # same file as before

# saving directly without doing any calculations like before
ds_test3.to_netcdf('/scratch3/mali/data/test/test_V250hov_encoding4_v2.nc',
                   encoding={'V': {'_FillValue': -999.0},
                             'time': {'units': "seconds since 1970-01-01 00:00:00"}})

# the above code works fine
```

Output of xr.show_versions(): […]

reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/2758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |

completed | xarray (13221727) | issue
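Editorial note, not part of the original thread: the two behaviours differ because xarray's CF time encoder only claims the 'units' key while the time variable still holds decoded datetime64 values; after the manual conversion to plain floats, 'units' falls through to the raw netCDF4 backend, which rejects it as an unknown parameter. A minimal sketch of the working pattern, with a hypothetical output path:

```python
import xarray as xr

# leave the decoded datetime64 time values untouched and let xarray
# convert them to the requested units while writing
ds = xr.open_dataset("test_file.nc")
ds.to_netcdf(
    "out.nc",  # hypothetical output path
    encoding={
        "V": {"_FillValue": -999.0},
        "time": {"units": "seconds since 1970-01-01 00:00:00"},
    },
)
```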
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
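As a hedged illustration, the filter this page applies ("type = issue and user = 33062222, sorted by updated_at descending") can be reproduced against the schema above with Python's sqlite3 module; the database filename github.db is an assumption:

```python
import sqlite3

# github.db is a hypothetical filename for a database built with this schema
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

# same filter and ordering this page applies to the issues table
rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at
    FROM issues
    WHERE type = 'issue' AND user = ?
    ORDER BY updated_at DESC
    """,
    (33062222,),
).fetchall()

for row in rows:
    print(row["number"], row["title"])
```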