issues
5 rows where state = "open", type = "issue", and user = 15331990, sorted by updated_at descending
**#2795 · Add "unique()" method, mimicking pandas**

- id: 415774106 · node_id: MDU6SXNzdWU0MTU3NzQxMDY=
- user: ahuang11 (15331990) · author_association: CONTRIBUTOR
- state: open · locked: 0 · comments: 6
- created_at: 2019-02-28T18:58:15Z · updated_at: 2024-01-08T17:31:30Z
- reactions: 10 (+1: 10) · https://api.github.com/repos/pydata/xarray/issues/2795/reactions
- repo: xarray (13221727) · type: issue

Body:

Would it be good to add a unique() method that mimics pandas?

Output:
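pandas' `Series.unique` returns values in order of first appearance, which `np.unique` alone does not (it sorts). A minimal, xarray-free sketch of those semantics — the `unique` helper and its NumPy-based approach here are illustrative, not existing xarray API:

```python
import numpy as np

def unique(values):
    """Return unique values in order of first appearance,
    mimicking pandas.Series.unique (np.unique alone sorts)."""
    arr = np.asarray(values).ravel()
    # np.unique gives sorted uniques plus the index of each first occurrence;
    # re-sorting by those indices restores first-seen order
    _, first_idx = np.unique(arr, return_index=True)
    return arr[np.sort(first_idx)]

print(unique([3, 1, 3, 2, 1]))  # [3 1 2]
```

An xarray version would apply the same logic to `da.values` and return a plain array, since uniqueness discards the dimension structure.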
**#5985 · Formatting data array as strings?**

- id: 1052753606 · node_id: I_kwDOAMm_X84-v77G
- user: ahuang11 (15331990) · author_association: CONTRIBUTOR
- state: open · locked: 0 · comments: 7
- created_at: 2021-11-13T19:29:02Z · updated_at: 2023-03-17T13:10:06Z
- reactions: 0 · https://api.github.com/repos/pydata/xarray/issues/5985/reactions
- repo: xarray (13221727) · type: issue

Body:

https://github.com/pydata/xarray/discussions/5865#discussioncomment-1636647

I wonder if it's possible to implement a built-in function like this, to wrap:

```python
import xarray as xr

da = xr.DataArray([5., 6., 7.])
das = xr.DataArray("%.2f")
das.str % da
# <xarray.DataArray (dim_0: 3)>
# array(['5.00', '6.00', '7.00'], dtype='<U4')
# Dimensions without coordinates: dim_0
```
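The `das.str % da` behaviour requested above amounts to applying a printf-style format string element-wise. A hedged NumPy-only sketch (`format_values` is a hypothetical helper, not xarray API):

```python
import numpy as np

def format_values(values, fmt="%.2f"):
    """Apply a printf-style format string to every element,
    returning an array of strings (cf. the das.str % da proposal)."""
    arr = np.asarray(values)
    formatted = [fmt % v for v in arr.ravel()]
    return np.array(formatted).reshape(arr.shape)

print(format_values([5.0, 6.0, 7.0]))  # ['5.00' '6.00' '7.00']
```

The Python-level loop is the simple route; a vectorized version could use `np.char.mod(fmt, arr)`, which implements the same `%` semantics.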
**#4958 · to_zarr mode='a-', append_dim; if dim value exists raise error**

- id: 816540158 · node_id: MDU6SXNzdWU4MTY1NDAxNTg=
- user: ahuang11 (15331990) · author_association: CONTRIBUTOR
- state: open · locked: 0 · comments: 1
- created_at: 2021-02-25T15:26:02Z · updated_at: 2022-04-09T15:19:28Z
- reactions: 0 · https://api.github.com/repos/pydata/xarray/issues/4958/reactions
- repo: xarray (13221727) · type: issue

Body:

If I have a ds with time, lat, and lon and I call the same `to_zarr` command twice, the dim values are duplicated; it would be nice to raise an error instead. Kind of like:

```python
import numpy as np
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature')
ds.to_zarr('test_air.zarr', append_dim='time')

ds_tmp = xr.open_mfdataset('test_air.zarr', engine='zarr')
overlap = np.intersect1d(ds['time'], ds_tmp['time'])
if len(overlap) > 1:
    raise ValueError(f'Found overlapping values in datasets {overlap}')

ds.to_zarr('test_air.zarr', append_dim='time')
```
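The guard the issue asks `to_zarr` to build in can be sketched as a standalone overlap check on the append dimension's coordinate values. Note that the quoted snippet tests `len(overlap) > 1`, which would miss a single duplicate timestamp; `> 0` catches any overlap. Names here are illustrative, not xarray API:

```python
import numpy as np

def check_append_dim(existing, incoming, dim="time"):
    """Raise if any coordinate value to be appended already exists --
    the behaviour the proposed mode='a-' would enforce."""
    overlap = np.intersect1d(np.asarray(existing), np.asarray(incoming))
    if overlap.size > 0:  # any overlap at all, not just more than one
        raise ValueError(f"Found overlapping values along {dim!r}: {overlap}")

old = np.arange("2021-01", "2021-04", dtype="datetime64[M]")
new = np.arange("2021-04", "2021-07", dtype="datetime64[M]")
check_append_dim(old, new)    # disjoint ranges: passes silently
# check_append_dim(old, old)  # same range twice: raises ValueError
```

In practice this check would run between opening the existing store and the second `to_zarr(..., append_dim=...)` call.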
**#4917 · Comparing against datetime.datetime and pd.Timestamp**

- id: 809708107 · node_id: MDU6SXNzdWU4MDk3MDgxMDc=
- user: ahuang11 (15331990) · author_association: CONTRIBUTOR
- state: open · locked: 0 · comments: 1
- created_at: 2021-02-16T22:54:39Z · updated_at: 2021-03-25T22:18:08Z
- reactions: 3 (+1: 3) · https://api.github.com/repos/pydata/xarray/issues/4917/reactions
- repo: xarray (13221727) · type: issue

Body:

Not sure if this is exactly a bug, or what the performance implications are, but it would be more user-friendly if the following were supported:

1.) comparing against datetime.datetime
2.) comparing against pd.Timestamp

This works, though, when converting to np.datetime64.
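Until such comparisons are supported natively, the workaround the issue describes — converting to `np.datetime64` before comparing — can be wrapped in a small helper. `after` is a hypothetical name, not xarray API; `pd.Timestamp` works here too because it subclasses `datetime.datetime`:

```python
import numpy as np
from datetime import datetime

def after(times, when):
    """Boolean mask of times strictly later than `when`, coercing a
    datetime.datetime (or pd.Timestamp) to np.datetime64 first --
    the conversion the issue notes makes the comparison work."""
    return np.asarray(times) > np.datetime64(when)

times = np.array(["2021-01-01", "2021-02-01", "2021-03-01"],
                 dtype="datetime64[D]")
print(after(times, datetime(2021, 1, 15)))  # [False  True  True]
```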
**#4587 · ffill with datetime64 errors**

- id: 743165216 · node_id: MDU6SXNzdWU3NDMxNjUyMTY=
- user: ahuang11 (15331990) · author_association: CONTRIBUTOR
- state: open · locked: 0 · comments: 1
- created_at: 2020-11-15T02:38:39Z · updated_at: 2020-11-15T14:23:19Z
- reactions: 0 · https://api.github.com/repos/pydata/xarray/issues/4587/reactions
- repo: xarray (13221727) · type: issue

Body:

```
~/anaconda3/envs/py3/lib/python3.7/site-packages/xarray/core/computation.py in apply_variable_ufunc(func, signature, exclude_dims, dask, output_dtypes, vectorize, keep_attrs, dask_gufunc_kwargs, *args)
    698             )
    699
--> 700     result_data = func(*input_data)
    701
    702     if signature.num_outputs == 1:

~/anaconda3/envs/py3/lib/python3.7/site-packages/bottleneck/slow/nonreduce_axis.py in push(a, n, axis)
     49     elif ndim == 0:
     50         return y
---> 51     fidx = ~np.isnan(y)
     52     recent = np.empty(y.shape[:-1])
     53     count = np.empty(y.shape[:-1])

TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
```
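The traceback points at bottleneck's `push` calling `np.isnan`, which rejects datetime64 input; missing datetimes (`NaT`) must be detected with `np.isnat` instead. A NumPy-only forward-fill sketch that sidesteps bottleneck — `ffill_datetime` is a hypothetical helper, not the fix xarray adopted:

```python
import numpy as np

def ffill_datetime(arr):
    """Forward-fill NaT gaps in a 1-D datetime64 array using np.isnat,
    the datetime-aware counterpart of the np.isnan call that fails above."""
    arr = np.asarray(arr)
    valid = ~np.isnat(arr)
    # index of the most recent valid entry at or before each position
    idx = np.where(valid, np.arange(arr.size), 0)
    np.maximum.accumulate(idx, out=idx)
    return arr[idx]

a = np.array(["2020-01-01", "NaT", "2020-01-03"], dtype="datetime64[D]")
print(ffill_datetime(a))  # ['2020-01-01' '2020-01-01' '2020-01-03']
```

A leading `NaT` stays `NaT`, since position 0 has no earlier value to fill from.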
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
   ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
   ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
   ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
   ON [issues] ([user]);
```
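The schema above can be exercised directly with Python's built-in `sqlite3` to reproduce the query behind this page (open issues for user 15331990, newest update first). The inserted rows are trimmed to the columns the query touches; ISO-8601 timestamps sort correctly as text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Key columns from the [issues] schema above
conn.execute("""
    CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        user INTEGER, state TEXT, updated_at TEXT, type TEXT
    )""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (415774106, 2795, 'Add "unique()" method, mimicking pandas',
         15331990, "open", "2024-01-08T17:31:30Z", "issue"),
        (1052753606, 5985, "Formatting data array as strings?",
         15331990, "open", "2023-03-17T13:10:06Z", "issue"),
    ],
)

# The query behind this page: open issues by one user, updated_at descending
rows = conn.execute(
    """
    SELECT number, title FROM issues
    WHERE state = 'open' AND type = 'issue' AND user = ?
    ORDER BY updated_at DESC LIMIT 5
    """,
    (15331990,),
).fetchall()
print(rows[0])  # (2795, 'Add "unique()" method, mimicking pandas')
```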