issues
3 rows where state = "closed" and user = 10676434 sorted by updated_at descending
Columns: id, node_id, number, title, user, state, locked, assignee, milestone, comments, created_at, updated_at, closed_at, author_association, active_lock_reason, draft, pull_request, body, reactions, performed_via_github_app, state_reason, repo, type
614275938 · issue 4045 · Millisecond precision is lost on datetime64 during IO roundtrip
user: half-adder (10676434) · state: closed · locked: 0 · comments: 9
created: 2020-05-07T19:01:18Z · updated: 2021-01-03T23:39:04Z · closed: 2021-01-03T23:39:04Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

I have millisecond-resolution time data as a coordinate on a DataArray. That data loses precision when round-tripping through disk.

MCVE Code Sample

Unzip the data. It will result in a pickle file.

```python
import pickle

import xarray as xr

bug_data_path = '/path/to/unzipped/bug_data.p'
tmp_path = '~/Desktop/test.nc'

with open(bug_data_path, 'rb') as f:
    data = pickle.load(f)

selector = dict(animal=0, timepoint=0, wavelength='410', pair=0)
before_disk_ts = data.time.sel(**selector).values[()]

data.time.encoding = {'units': 'microseconds since 1900-01-01',
                      'calendar': 'proleptic_gregorian'}
data.to_netcdf(tmp_path)

after_disk_ts = xr.load_dataarray(tmp_path).time.sel(**selector).values[()]
print(f'before roundtrip: {before_disk_ts}')
print(f' after roundtrip: {after_disk_ts}')
```

Expected Output

Problem Description

As you can see, I lose millisecond precision in this data. (The same happens when I use milliseconds in the encoding.)

Versions

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:05:27) [Clang 9.0.1]
python-bits: 64
OS: Darwin
OS-release: 19.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: None.UTF-8
libhdf5: 1.10.5
libnetcdf: 4.7.3
xarray: 0.15.1
pandas: 1.0.1
numpy: 1.18.1
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.8.0
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.11.0
distributed: 2.14.0
matplotlib: 3.1.3
cartopy: None
seaborn: 0.10.0
numbagg: None
setuptools: 45.2.0.post20200209
pip: 20.0.2
conda: None
pytest: 5.3.5
IPython: 7.12.0
sphinx: 2.4.3
```

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/4045/reactions)
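The reported loss is consistent with the time offsets being pushed through float64 somewhere in the encode/decode path: nanosecond offsets from a 1900 epoch are around 3.8e18, far beyond 2^53, the largest range in which float64 represents every integer exactly. A standalone numpy sketch of that effect (the timestamp is illustrative, not from the issue's data, and this is one plausible mechanism rather than a confirmed diagnosis of xarray's code path):

```python
import numpy as np

# Illustrative millisecond-precision timestamp (not the issue's actual data).
t = np.datetime64('2020-05-07T19:01:18.123', 'ns')
epoch = np.datetime64('1900-01-01', 'ns')

# Integer nanoseconds since the epoch round-trip exactly.
ns = (t - epoch).astype('int64')
assert epoch + np.timedelta64(ns, 'ns') == t

# The same offset as float64 is ~3.8e18; adjacent doubles at that
# magnitude are 512 ns apart, so the value rounds and the round-trip
# through float is no longer exact.
assert int(float(ns)) != ns
```

Encoding with an int64 dtype and sufficiently fine units avoids the float path entirely, which is why integer offsets round-trip above while the float conversion does not.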
613717463 · issue 4042 · DataArray coordinates transformed into variables when saved to disk
user: half-adder (10676434) · state: closed · locked: 0 · comments: 1
created: 2020-05-07T01:51:36Z · updated: 2020-05-07T18:42:52Z · closed: 2020-05-07T18:42:52Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

When I save my DataArray to disk, its coordinates are transformed into data variables. Also, the attributes have disappeared.

MCVE Code Sample

Unzip the file. It should be a pickle file.

```python
import pickle

import xarray as xr

data = pickle.load(open('/path/to/bug_data.p', 'rb'))
print(type(data) == xr.DataArray)
# >>> True

path = '/var/tmp/bug_data.nc'
data.to_netcdf(path, format='NETCDF4', mode='w')

xr.open_dataarray(path)  # fails due to multiple variables
xr.open_dataset(path)    # succeeds
```

Expected Output

I expect the DataArray to load back unchanged.

Problem Description

The DataArray -> Disk -> DataArray roundtrip should be seamless.

Versions

Output of `xr.show_versions()`:

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.7.6 | packaged by conda-forge | (default, Jan 7 2020, 22:05:27) [Clang 9.0.1]
python-bits: 64
OS: Darwin
OS-release: 19.4.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.ISO8859-1
libhdf5: 1.10.5
libnetcdf: 4.7.3
xarray: 0.15.1
pandas: 1.0.1
numpy: 1.18.1
scipy: 1.4.1
netCDF4: 1.5.3
pydap: None
h5netcdf: 0.8.0
h5py: 2.10.0
Nio: None
zarr: None
cftime: 1.0.4.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: 2.11.0
distributed: 2.14.0
matplotlib: 3.1.3
cartopy: None
seaborn: 0.10.0
numbagg: None
setuptools: 45.2.0.post20200209
pip: 20.0.2
conda: None
pytest: 5.3.5
IPython: 7.12.0
sphinx: 2.4.3
```

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/4042/reactions)
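For context on why `xr.open_dataarray` is stricter than `xr.open_dataset`: `DataArray.to_netcdf` stores the array by converting it to a Dataset, and `open_dataarray` only accepts files containing exactly one data variable. A minimal sketch of that conversion with synthetic data and no file IO (the variable names here are invented for illustration):

```python
import numpy as np
import xarray as xr

# A named DataArray with one coordinate and an attribute.
da = xr.DataArray(np.arange(3.0), dims='x',
                  coords={'x': [10, 20, 30]},
                  name='signal', attrs={'units': 'counts'})

# to_netcdf stores this as a Dataset: one data variable, one coordinate.
ds = da.to_dataset()
assert list(ds.data_vars) == ['signal']
assert list(ds.coords) == ['x']
```

If, on reload, a coordinate comes back as a second *data* variable instead (the symptom in this issue), `open_dataarray` cannot decide which variable is "the" array and fails, while `open_dataset` happily loads everything.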
605717342 · issue 3997 · DataArray.to_netcdf breaks in filename-dependent manner
user: half-adder (10676434) · state: closed · locked: 0 · comments: 3
created: 2020-04-23T17:31:08Z · updated: 2020-05-07T02:10:44Z · closed: 2020-05-07T02:10:44Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

Writing a DataArray to disk breaks if the filename is

MCVE Code Sample

```python
# load data into
```

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/3997/reactions)
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
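The row selection at the top of this page ("3 rows where state = "closed" and user = 10676434 sorted by updated_at descending") corresponds to a straightforward query against this schema. A self-contained sketch using an in-memory SQLite database with a trimmed-down table and placeholder rows (only the columns the query touches; the titles are abbreviated, and the fourth row is invented to show the filter working):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# Minimal stand-in for the issues table above.
conn.executescript("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    number INTEGER, title TEXT, user INTEGER,
    state TEXT, updated_at TEXT
);
INSERT INTO issues VALUES
    (614275938, 4045, 'Millisecond precision is lost...', 10676434, 'closed', '2021-01-03T23:39:04Z'),
    (613717463, 4042, 'DataArray coordinates transformed...', 10676434, 'closed', '2020-05-07T18:42:52Z'),
    (605717342, 3997, 'DataArray.to_netcdf breaks...', 10676434, 'closed', '2020-05-07T02:10:44Z'),
    (1, 1, 'unrelated open issue', 99, 'open', '2022-01-01T00:00:00Z');
""")

# ISO-8601 timestamps sort correctly as plain text, so ORDER BY works
# even though updated_at is a TEXT column.
rows = conn.execute("""
    SELECT number, title FROM issues
    WHERE state = 'closed' AND user = 10676434
    ORDER BY updated_at DESC
    LIMIT 3
""").fetchall()
assert [n for n, _ in rows] == [4045, 4042, 3997]
```

Storing timestamps as ISO-8601 TEXT is the usual SQLite convention; it keeps lexicographic and chronological order identical, which is what makes the `updated_at` index useful for this sort.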