issues
4 rows where user = 8419157 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
659142025 | MDU6SXNzdWU2NTkxNDIwMjU= | 4235 | Dataset plot line | DancingQuanta 8419157 | open | 0 | 5 | 2020-07-17T10:51:24Z | 2022-04-18T06:59:15Z | NONE | Is your feature request related to a problem? Please describe.
Describe the solution you'd like: a new plotting method that draws a line plot directly from a Dataset (see the sketch after this row). I suspect that as the number of dimensions grows from 1D to 2D for both variables, and later to ND, there will be many ways to interpret the arguments given to the plotting function. However, the fact that the method name is … Having seen the code for the … |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4235/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
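The body of issue 4235 is truncated above, so the exact plotting API being proposed is not recoverable here. As a rough illustration of the gap a Dataset-level line plot would fill, the sketch below plots one data variable against another; the variable names and the matplotlib fallback are assumptions for illustration, not the author's proposal.

```python
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt

# Illustrative two-variable Dataset sharing a single dimension.
t = np.linspace(0, 1, 500)
ds = xr.Dataset(
    {"voltage": ("time", np.sin(2 * np.pi * 5 * t)),
     "current": ("time", np.cos(2 * np.pi * 5 * t))},
    coords={"time": t},
)

# Each DataArray already has a line plot against its own dimension...
ds["voltage"].plot.line(x="time")

# ...but plotting one data variable against another means dropping down
# to matplotlib by hand, which is roughly the gap a Dataset line-plot
# method would close.
plt.figure()
plt.plot(ds["voltage"].values, ds["current"].values)
plt.xlabel("voltage")
plt.ylabel("current")
plt.show()
```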
553672182 | MDU6SXNzdWU1NTM2NzIxODI= | 3715 | ValueError: buffer source array is read-only with apply_ufunc | DancingQuanta 8419157 | closed | 0 | 5 | 2020-01-22T17:00:23Z | 2022-04-18T02:18:47Z | 2022-04-18T02:18:47Z | NONE | I am trying to use `scipy.signal.find_peaks` on a DataArray via `apply_ufunc`.

MCVE Code Sample

```python
import numpy as np
import xarray as xr
from scipy.signal import find_peaks

# Generate waveform
x = (np.sin(2 * np.pi * (2 * np.linspace(2, 10, 1000)) * np.arange(1000) / 48000)
     + np.random.normal(0, 1, 1000) * 0.15)

# Find peaks the non-xarray way
peaks, _ = find_peaks(x, prominence=1)
print(peaks)

# Cast waveform to xr.DataArray
x = xr.DataArray(x, dims='time')

# Duplicate data along a new dimension
rep = xr.DataArray(range(11), dims='repeat')
x = (x.broadcast_like(rep).assign_coords(repeat=rep))

def process_peaks(arr):
    # Apply find_peaks
    peaks, _ = find_peaks(arr, prominence=1)
    return peaks

# Apply function to array
results = xr.apply_ufunc(
    process_peaks,
    x,
    input_core_dims=[['time']],
    output_core_dims=[['peaks']],
    vectorize=True
)

# Should show repeats of peak results
print(results)
```

Expected Output

In the MCVE above there are two print statements. The first prints the results of the peak finding without involving xarray. The second print statement prints out the results of the `apply_ufunc` call.

Problem Description

The `apply_ufunc` call fails inside `find_peaks` with:

```
ValueError                                Traceback (most recent call last)
<ipython-input-43-1a80f67560a6> in <module>
     35     input_core_dims=[['time']],
     36     output_core_dims=[['peaks']],
---> 37     vectorize=True
     38 )

c:\users\at17\repos\xarray\xarray\core\computation.py in apply_ufunc(...)
-> 1042     keep_attrs=keep_attrs,

c:\users\at17\repos\xarray\xarray\core\computation.py in apply_dataarray_vfunc(func, signature, join, exclude_dims, keep_attrs, *args)
--> 232     result_var = func(*data_vars)

c:\users\at17\repos\xarray\xarray\core\computation.py in apply_variable_ufunc(func, signature, exclude_dims, dask, output_dtypes, output_sizes, keep_attrs, *args)
--> 601     result_data = func(*input_data)

~\.conda\envs\asi\lib\site-packages\numpy\lib\function_base.py in __call__(self, *args, **kwargs)
-> 2091     return self._vectorize_call(func=func, args=vargs)

~\.conda\envs\asi\lib\site-packages\numpy\lib\function_base.py in _vectorize_call_with_signature(self, func, args)
-> 2198     results = func(*(arg[index] for arg in args))

<ipython-input-43-1a80f67560a6> in process_peaks(arr)
---> 29     peaks, _ = find_peaks(arr, prominence=1)

~\.conda\envs\asi\lib\site-packages\scipy\signal\_peak_finding.py in find_peaks(x, height, threshold, distance, prominence, width, wlen, rel_height, plateau_size)

_peak_finding_utils.pyx in scipy.signal._peak_finding_utils._local_maxima_1d()

~\.conda\envs\asi\lib\site-packages\scipy\signal\_peak_finding_utils.cp37-win_amd64.pyd in View.MemoryView.memoryview_cwrapper()

~\.conda\envs\asi\lib\site-packages\scipy\signal\_peak_finding_utils.cp37-win_amd64.pyd in View.MemoryView.memoryview.__cinit__()

ValueError: buffer source array is read-only
```

I had a conversation with some people on Gitter about it, and a workaround was proposed: adding a line of code before the `find_peaks` call (the exact line is cut off in this export; see the sketch after this row).

Output of `xr.show_versions()` |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3715/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
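The workaround line from the Gitter conversation is cut off in the export of issue 3715. A fix consistent with the error, copying the read-only broadcast input before handing it to `find_peaks`, is sketched below as an assumption; it may not be the exact line that was proposed.

```python
import numpy as np
from scipy.signal import find_peaks

def process_peaks(arr):
    # broadcast_like can hand apply_ufunc a read-only view of the data;
    # copying it gives find_peaks a writable buffer and avoids
    # "ValueError: buffer source array is read-only".
    arr = np.array(arr, copy=True)
    peaks, _ = find_peaks(arr, prominence=1)
    return peaks
```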
603309899 | MDU6SXNzdWU2MDMzMDk4OTk= | 3985 | xarray=0.15.1 regression: Groupby drop multi-index | DancingQuanta 8419157 | closed | 0 | 4 | 2020-04-20T15:05:51Z | 2021-02-16T15:59:46Z | 2021-02-16T15:59:46Z | NONE | I have written a function `process_stacked_groupby` that stacks all but one dimension of a Dataset and runs a groupby over the stacked dimension.

MCVE Code Sample

```python
import numpy as np
import xarray as xr

# Dimensions
N = xr.DataArray(np.arange(100), dims='N', name='N')
reps = xr.DataArray(np.arange(5), dims='reps', name='reps')
horizon = xr.DataArray([1, -1], dims='horizon', name='horizon')
horizon.attrs = {'long_name': 'Horizonal', 'units': 'H'}
vertical = xr.DataArray(np.arange(1, 4), dims='vertical', name='vertical')
vertical.attrs = {'long_name': 'Vertical', 'units': 'V'}

# Variables
x = xr.DataArray(np.random.randn(len(N), len(reps), len(horizon), len(vertical)),
                 dims=['N', 'reps', 'horizon', 'vertical'], name='x')
y = x * 0.1
y.name = 'y'

# Merge x, y
data = xr.merge([x, y])

# Assign coords
data = data.assign_coords(reps=reps, vertical=vertical, horizon=horizon)

# Function that stacks all but one dimension and runs a groupby over the stacked dimension
def process_stacked_groupby(ds, dim, func, *args):
    ...  # body cut off in this export; see the sketch after this row

# Function to apply on groupby
def fn(ds):
    return ds

# Run groupby with applied function
data.pipe(process_stacked_groupby, 'N', fn)
```

Expected Output

Prior to xarray=0.15.0, the above code produced the result I wanted. The function should be able to:
1. stack the chosen dimensions
2. groupby the stacked dimension
3. apply a function on each group
   a. the function actually passes along another function with the unstacked group coord
   b. add the multi-index stacked group coord back to the results of this function
4. combine the groups
5. unstack the stacked dimension

Problem Description

After upgrading to 0.15.1, the above code stopped working.
The error occurred at the line …

Versions: 0.15.1 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
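The body of `process_stacked_groupby` is cut off in the row above. A minimal sketch matching the description in issue 3985 (stack all but one dimension, group over the stacked dimension, apply the function, unstack) might look like the following; it is a reconstruction under those assumptions, not the author's code.

```python
import xarray as xr

def process_stacked_groupby(ds, dim, func, *args):
    # Stack every dimension except `dim` into one multi-index dimension,
    # group over it, apply `func` to each group, then unstack the result.
    other_dims = [d for d in ds.dims if d != dim]
    stacked = ds.stack(stacked_dim=other_dims)
    result = stacked.groupby('stacked_dim').map(func, args=args)
    return result.unstack('stacked_dim')

# Used as in the MCVE above: data.pipe(process_stacked_groupby, 'N', fn)
```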
553766743 | MDU6SXNzdWU1NTM3NjY3NDM= | 3716 | Feature request: apply_ufunc create dataset from multi-return function | DancingQuanta 8419157 | closed | 0 | 1 | 2020-01-22T20:10:43Z | 2020-01-25T00:00:12Z | 2020-01-25T00:00:12Z | NONE | I wanted to pass a list of variable names to `apply_ufunc` so that a function returning multiple arrays produces a Dataset directly (see the sketch after this row). I could create a PR, but I need to ask first because of the large number of possible arguments to, and returns from, `apply_ufunc`. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
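For context on issue 3716: `apply_ufunc` can already handle a function with multiple return values when `output_core_dims` has one entry per output, but it hands back a tuple of DataArrays that the caller must name and assemble into a Dataset. The sketch below illustrates that manual step; the function and variable names are illustrative only.

```python
import numpy as np
import xarray as xr

def min_max(arr):
    # Toy multi-return function: reduces the core dimension to two scalars.
    return arr.min(axis=-1), arr.max(axis=-1)

da = xr.DataArray(np.random.randn(4, 100), dims=['trial', 'time'])

lo, hi = xr.apply_ufunc(
    min_max,
    da,
    input_core_dims=[['time']],
    output_core_dims=[[], []],  # one (empty) entry per return value
)

# Today the Dataset is assembled by hand from the returned tuple; the
# request is for apply_ufunc to accept variable names and do this itself.
ds = xr.Dataset({'minimum': lo, 'maximum': hi})
```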
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
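As an illustration of how the page above maps onto this schema, the query behind "4 rows where user = 8419157 sorted by updated_at descending" can be reproduced with Python's sqlite3 module; the database filename `github.db` is an assumption, substitute the actual github-to-sqlite database file.

```python
import sqlite3

# Filename is an assumption; point this at the real github-to-sqlite database.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, number, title, state, comments, created_at, updated_at, closed_at
    FROM issues
    WHERE [user] = ?
    ORDER BY updated_at DESC
    """,
    (8419157,),
).fetchall()

for row in rows:
    print(row["number"], row["state"], row["updated_at"], row["title"])
```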