issues
2 rows where type = "issue" and user = 64479100 sorted by updated_at descending
id: 1414669747 · node_id: I_kwDOAMm_X85UUiWz
number: 7186
title: netCDF4: support byte strings as attribute values
user: krihabu (64479100)
state: open · locked: 0 · comments: 2
created_at: 2022-10-19T09:58:04Z · updated_at: 2023-01-17T18:30:20Z · closed_at: (none)
author_association: NONE
repo: xarray (13221727) · type: issue
reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/7186/reactions)

body:

What is your issue?

When I have a string attribute with special characters like '°' or German umlauts (Ä, Ü, etc.), it gets written to the file as type NC_STRING. Other string attributes that contain no special characters are saved as NC_CHAR. This causes problems when I subsequently want to open the file with NetCDF-Fortran, because it does not fully support NC_STRING. So my question is: is there a way to force xarray to write the string attribute as NC_CHAR?

Example:

```python
import numpy as np
import xarray as xr

data = np.ones([12, 10])
ds = xr.Dataset(
    {"data": (["x", "y"], data)},
    coords={"x": np.arange(12), "y": np.arange(10)},
)
ds["x"].attrs["first_str"] = "foo"
ds["x"].attrs["second_str"] = "bar°"
ds["x"].attrs["third_str"] = "hää"
ds.to_netcdf("testds.nc")
```
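The split the report describes can be sketched with a small stdlib-only helper. This is an assumption about netCDF4-python's behavior inferred from the report (pure-ASCII strings end up as NC_CHAR, anything else as NC_STRING), not its actual implementation:

```python
def likely_attr_nc_type(value: str) -> str:
    """Sketch (assumption): a str attribute is stored as NC_CHAR when it
    is pure ASCII, and falls back to NC_STRING otherwise."""
    try:
        value.encode("ascii")
        return "NC_CHAR"
    except UnicodeEncodeError:
        return "NC_STRING"

print(likely_attr_nc_type("foo"))   # ASCII only
print(likely_attr_nc_type("bar°"))  # contains '°'
print(likely_attr_nc_type("hää"))   # contains umlauts
```

Under this assumption, only `"foo"` from the example above would survive as NC_CHAR, matching what the reporter observes.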
id: 1516234283 · node_id: I_kwDOAMm_X85aX-Yr
number: 7406
title: Grid mapping not saved in attributes when extra encoding is specified
user: krihabu (64479100)
state: open · locked: 0 · comments: 1
created_at: 2023-01-02T10:17:07Z · updated_at: 2023-01-15T15:46:55Z · closed_at: (none)
author_association: NONE
repo: xarray (13221727) · type: issue
reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/7406/reactions)

body:

What is your issue?

I will use the following NetCDF file to illustrate the issue:

With the code

But when I save with extra encoding information for the data variable

I suspect this is because the previous encoding might become invalid when the dataset is changed or encoded differently? But this makes it quite hard not to lose the grid mapping info, which should normally not be affected by the data type, fill value, or similar. I use Python 3.10.5 and xarray 2022.3.0.
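The loss described above could be worked around by copying the grid mapping back into the variable's attributes before saving. The sketch below operates on plain dicts standing in for a variable's `attrs` and `encoding`; the helper name and approach are hypothetical, not an xarray API:

```python
def preserve_grid_mapping(attrs: dict, encoding: dict) -> dict:
    """Hypothetical workaround: copy 'grid_mapping' from the decoded
    encoding back into attrs before saving, so it is not lost when an
    explicit per-variable encoding is supplied."""
    out = dict(attrs)
    if "grid_mapping" in encoding:
        # keep an existing attrs entry if one is already set
        out.setdefault("grid_mapping", encoding["grid_mapping"])
    return out

attrs = {"units": "m"}
encoding = {"grid_mapping": "crs", "dtype": "float64"}
print(preserve_grid_mapping(attrs, encoding))
```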
```sql
CREATE TABLE [issues] (
    [id] INTEGER PRIMARY KEY,
    [node_id] TEXT,
    [number] INTEGER,
    [title] TEXT,
    [user] INTEGER REFERENCES [users]([id]),
    [state] TEXT,
    [locked] INTEGER,
    [assignee] INTEGER REFERENCES [users]([id]),
    [milestone] INTEGER REFERENCES [milestones]([id]),
    [comments] INTEGER,
    [created_at] TEXT,
    [updated_at] TEXT,
    [closed_at] TEXT,
    [author_association] TEXT,
    [active_lock_reason] TEXT,
    [draft] INTEGER,
    [pull_request] TEXT,
    [body] TEXT,
    [reactions] TEXT,
    [performed_via_github_app] TEXT,
    [state_reason] TEXT,
    [repo] INTEGER REFERENCES [repos]([id]),
    [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
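The query this page describes ("2 rows where type = "issue" and user = 64479100 sorted by updated_at descending") can be reproduced with Python's stdlib sqlite3 against a trimmed-down copy of the schema (the reduced column set is a simplification for illustration):

```python
import sqlite3

# Trimmed copy of the [issues] table: only the columns the query touches.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    number INTEGER,
    title TEXT,
    user INTEGER,
    state TEXT,
    updated_at TEXT,
    type TEXT
);
INSERT INTO issues VALUES
 (1414669747, 7186, 'netCDF4: support byte strings as attribute values',
  64479100, 'open', '2023-01-17T18:30:20Z', 'issue'),
 (1516234283, 7406, 'Grid mapping not saved in attributes when extra encoding is specified',
  64479100, 'open', '2023-01-15T15:46:55Z', 'issue');
""")

# The filter and sort from the top of the page; ISO 8601 timestamps
# sort correctly as plain strings.
rows = con.execute(
    "SELECT number FROM issues "
    "WHERE type = 'issue' AND user = 64479100 "
    "ORDER BY updated_at DESC"
).fetchall()
print(rows)  # -> [(7186,), (7406,)]
```

Issue 7186 sorts first because its `updated_at` (2023-01-17) is later than that of issue 7406 (2023-01-15).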