issues
3 rows where repo = 13221727, state = "closed" and user = 16655388 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
368833116 | MDU6SXNzdWUzNjg4MzMxMTY= | 2478 | masked_array write/read differences between xarray and netCDF4 | sbiner 16655388 | closed | 0 | 3 | 2018-10-10T20:12:19Z | 2023-09-13T12:41:03Z | 2023-09-13T12:41:02Z | NONE | Here is code used to write and read a masked_array with the netCDF4 and xarray modules. As you can see if you run the code, in three of the four cases the masked value is read back as np.nan. However, for the netCDF file written by netCDF4 and read by xarray, the masked value is the default _FillValue of 9.96920997e+36. I wonder if this is expected or if I am doing something wrong.

```python
import xarray as xr
import netCDF4 as nc
import numpy as np
import os

data = np.ma.array([1., 2.], mask=[True, False])

# create file with netCDF4
nc_file = 'ncfile.nc'
if os.path.exists(nc_file):
    os.remove(nc_file)
ds = nc.Dataset(nc_file, 'w')
ds.createDimension('dim1', 2)
var = ds.createVariable('data', 'f8', dimensions=('dim1',))
var[:] = data
ds.close()

# create file with xarray
da = xr.DataArray(data, name='data', dims={'dim1': 2})
nc_file = 'xrfile.nc'
if os.path.exists(nc_file):
    os.remove(nc_file)
da.to_netcdf(nc_file, 'w')
da.close()

print('original data: {}'.format(data))
da = xr.open_dataset('ncfile.nc').data
print('data from nc read by xr: {}'.format(da.values))
da = xr.open_dataset('xrfile.nc').data
print('data from xr read by xr: {}'.format(da.values))
data = nc.Dataset('ncfile.nc').variables['data'][:]
print('data from nc read by nc: {}'.format(da.values))
data = nc.Dataset('xrfile.nc').variables['data'][:]
print('data from xr read by nc: {}'.format(da.values))
print('done')
```

Here is the output I get:

```
original data: [-- 2.0]
data from nc read by xr: [9.96920997e+36 2.00000000e+00]
data from xr read by xr: [nan 2.]
data from nc read by nc: [nan 2.]
data from xr read by nc: [nan 2.]
done
```

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
363326726 | MDU6SXNzdWUzNjMzMjY3MjY= | 2437 | xarray potential inconstistencies with cftime | sbiner 16655388 | closed | 0 | 16 | 2018-09-24T21:25:46Z | 2021-06-22T17:01:35Z | 2019-02-08T15:05:38Z | NONE | I am trying to use xarray with different types of calendars. I made a few tests and wonder if somebody can help me make sense of the results. In my test, I generate a DataArray, write it to netCDF, read it back into a new DataArray, and resample both versions.

Code Sample, a copy-pastable example if possible

```python
import xarray as xr
import cftime
import numpy as np

# generate data for 365_day calendar
units = 'days since 2000-01-01 00:00'
time_365 = cftime.num2date(np.arange(0, 10 * 365), units, '365_day')
da = xr.DataArray(np.arange(time_365.size), coords=[time_365], dims='time', name='data')

# write DataArray to netcdf and read it into a new DataArray
da.to_netcdf('data_365.nc', 'w')
da2 = xr.open_dataset('data_365.nc').data

# try resample da
try:
    mean = da.resample(time='Y').mean()
    print(mean.values)
except TypeError:
    print('got TypeError for da')

# try resample da2
mean = da2.resample(time='Y').mean()
print(mean.values)
```

Problem description

As seen in the code, the resample does not work for `da` (it raises a TypeError) but does work for `da2`. I wonder if this makes sense or if it is something that should eventually be corrected.

In [6]: print(cftime.__version__)
1.0.1

INSTALLED VERSIONS
------------------
commit: None
python: 3.6.5.final.0
python-bits: 64
OS: Darwin
OS-release: 17.7.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: fr_CA.UTF-8
LOCALE: fr_CA.UTF-8
xarray: 0.10.8
pandas: 0.23.0
numpy: 1.14.3
scipy: 1.1.0
netCDF4: 1.4.1
h5netcdf: None
h5py: 2.7.1
Nio: None
zarr: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.17.5
distributed: 1.21.8
matplotlib: 2.2.2
cartopy: None
seaborn: 0.8.1
setuptools: 39.1.0
pip: 10.0.1
conda: 4.5.11
pytest: 3.5.1
IPython: 6.4.0
sphinx: 1.7.4 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
326553877 | MDU6SXNzdWUzMjY1NTM4Nzc= | 2187 | open_dataset crash with long filenames | sbiner 16655388 | closed | 0 | 2 | 2018-05-25T14:47:31Z | 2018-05-29T14:43:50Z | 2018-05-29T14:42:35Z | NONE | Code Sample

```python
import xarray as xr
import shutil
import numpy as np

# create netcdf file
data = np.random.rand(4, 3)
foo = xr.DataArray(data)
foo.to_netcdf('test.nc')

# reopen the file repeatedly, lengthening the filename by one
# character each time
f_nc = 'a.nc'
shutil.copy('test.nc', f_nc)
while True:
    print('{:05n} characters'.format(len(f_nc)))
    ds1 = xr.open_dataset(f_nc)
    ds1.close()
    nf_nc = 'a' + f_nc
    shutil.move(f_nc, nf_nc)
    f_nc = nf_nc
```

Problem description

On my Linux machine (CentOS) this code crashes (memory corruption) when the filename length hits 32 characters. On my OSX machine it is fine until 255 characters and stops with an IOError.

Output of
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/2187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
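The _FillValue mismatch reported in issue 2478 above comes down to how a masked slot is materialised on write versus how it is decoded on read. A minimal numpy-only sketch of the two sides (no netCDF files involved; `NC_FILL_DOUBLE` is the netCDF library's default fill constant for 8-byte floats, hard-coded here for illustration):

```python
import numpy as np

data = np.ma.array([1., 2.], mask=[True, False])

# netCDF's default fill value for doubles: a writer that stores this raw
# sentinel without also recording a _FillValue attribute leaves an
# attribute-honouring reader showing the sentinel itself, as in the
# "data from nc read by xr" line of the reported output
NC_FILL_DOUBLE = 9.969209968386869e+36
as_written = data.filled(NC_FILL_DOUBLE)

# what a decoder that does recognise the fill value produces: the masked
# slot becomes NaN, matching the "[nan 2.]" lines of the reported output
decoded = np.where(as_written == NC_FILL_DOUBLE, np.nan, as_written)

print(as_written)
print(decoded)
```

The sketch only illustrates the masking arithmetic; whether a given reader applies it depends on the _FillValue metadata actually present in the file.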
Advanced export
JSON shape: default, array, newline-delimited, object
CREATE TABLE [issues] ( [id] INTEGER PRIMARY KEY, [node_id] TEXT, [number] INTEGER, [title] TEXT, [user] INTEGER REFERENCES [users]([id]), [state] TEXT, [locked] INTEGER, [assignee] INTEGER REFERENCES [users]([id]), [milestone] INTEGER REFERENCES [milestones]([id]), [comments] INTEGER, [created_at] TEXT, [updated_at] TEXT, [closed_at] TEXT, [author_association] TEXT, [active_lock_reason] TEXT, [draft] INTEGER, [pull_request] TEXT, [body] TEXT, [reactions] TEXT, [performed_via_github_app] TEXT, [state_reason] TEXT, [repo] INTEGER REFERENCES [repos]([id]), [type] TEXT ); CREATE INDEX [idx_issues_repo] ON [issues] ([repo]); CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]); CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]); CREATE INDEX [idx_issues_user] ON [issues] ([user]);
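The filtered view at the top of the page ("3 rows where repo = 13221727, state = 'closed' and user = 16655388 sorted by updated_at descending") can be reproduced against this schema with Python's stdlib sqlite3. The sketch below uses an abbreviated subset of the 23 columns in the DDL above, populated with the id, number, title, and timestamp values from the three rows shown:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# abbreviated schema: a subset of the columns from the full CREATE TABLE
conn.execute("""CREATE TABLE issues (
    id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
    user INTEGER, state TEXT, updated_at TEXT, repo INTEGER)""")

rows = [
    (368833116, 2478, 'masked_array write/read differences between xarray and netCDF4',
     16655388, 'closed', '2023-09-13T12:41:03Z', 13221727),
    (363326726, 2437, 'xarray potential inconstistencies with cftime',
     16655388, 'closed', '2021-06-22T17:01:35Z', 13221727),
    (326553877, 2187, 'open_dataset crash with long filenames',
     16655388, 'closed', '2018-05-29T14:43:50Z', 13221727),
]
conn.executemany('INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)', rows)

# the filter and sort described at the top of the page
cur = conn.execute(
    "SELECT number, title FROM issues "
    "WHERE repo = 13221727 AND state = 'closed' AND user = 16655388 "
    "ORDER BY updated_at DESC")
for number, title in cur:
    print(number, title)
```

Sorting the ISO-8601 `updated_at` strings lexicographically is equivalent to sorting chronologically, which is why the TEXT column ordering works here.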