issues
7 rows where user = 941907 sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at ▲ | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2275107296 | I_kwDOAMm_X86Hm2Hg | 8992 | (i)loc slicer specialization for convenient slicing by dimension label as `.loc('dim_name')[:n]` | smartass101 941907 | open | 0 | 0 | 2024-05-02T10:04:11Z | 2024-05-02T14:47:09Z | NONE | **Is your feature request related to a problem?** Until PEP 472, I'm sure we would all love to be able to do indexing with labeled dimension names inside brackets. Here I'm proposing a slightly modified syntax which is possible to implement and would be quite convenient IMHO. **Describe the solution you'd like** This is inspired by the Pandas
This accessor becomes especially convenient when you quickly want to index just one dimension such as
**Describe alternatives you've considered** The equivalent **Additional context** This |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
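The proposed calling convention can be sketched without touching xarray internals: `.loc('dim_name')` returns a small slicer object whose `__getitem__` applies the key along just that named dimension. `DimSlicer` and `ToyArray` below are hypothetical stand-ins for illustration, not xarray API.

```python
import numpy as np

class DimSlicer:
    """Hypothetical helper returned by loc('dim_name')."""
    def __init__(self, obj, dim):
        self.obj = obj
        self.dim = dim

    def __getitem__(self, key):
        # Build a full indexer that slices only the named dimension.
        axis = self.obj.dims.index(self.dim)
        indexer = [slice(None)] * self.obj.data.ndim
        indexer[axis] = key
        return ToyArray(self.obj.data[tuple(indexer)], self.obj.dims)

class ToyArray:
    """Minimal stand-in for a DataArray with named dimensions."""
    def __init__(self, data, dims):
        self.data = np.asarray(data)
        self.dims = tuple(dims)

    def loc(self, dim):
        # The proposal: arr.loc('x')[:n] slices only the 'x' dimension.
        return DimSlicer(self, dim)
```

With this sketch, `ToyArray(np.arange(12).reshape(3, 4), ('time', 'x')).loc('x')[:2]` keeps the `time` dimension intact and slices `x` down to length 2.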
489034521 | MDU6SXNzdWU0ODkwMzQ1MjE= | 3279 | Feature request: vector cross product | smartass101 941907 | closed | 0 | 2 | 2019-09-04T09:05:41Z | 2021-12-29T07:54:37Z | 2021-12-29T07:54:37Z | NONE | xarray currently has the

```python
def cross(a, b, spatial_dim, output_dtype=None):
    """xarray-compatible cross product"""
```

**Example usage**

**Main question** Do you want such a function (and possibly associated I could make a PR if you'd want to have it in **Output of**
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
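The `cross` signature above maps a dimension name (`spatial_dim`) to the positional axis arguments of `np.cross`; that mapping is the core of the proposal. A plain-numpy sketch of it follows, where `dims_a`/`dims_b` are stand-ins for the `.dims` tuples a DataArray would carry (the real implementation would instead align `spatial_dim` via `xr.apply_ufunc` with `input_core_dims`, and xarray later gained a built-in `xr.cross`):

```python
import numpy as np

def cross(a, b, dims_a, dims_b, spatial_dim):
    """Cross product along a named dimension (illustrative numpy sketch)."""
    # Resolve the dimension name to the positional axis in each operand.
    axisa = dims_a.index(spatial_dim)
    axisb = dims_b.index(spatial_dim)
    # np.cross does the actual 3-component cross product along those axes.
    return np.cross(a, b, axisa=axisa, axisb=axisb, axisc=-1)
```

For unit vectors along x and y this returns the unit vector along z, as expected.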
181340410 | MDU6SXNzdWUxODEzNDA0MTA= | 1040 | DataArray.diff dim argument should be optional as is in docstring | smartass101 941907 | closed | 0 | 7 | 2016-10-06T07:14:50Z | 2020-03-28T18:18:21Z | 2020-03-28T18:18:21Z | NONE | The docstring of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
528701910 | MDU6SXNzdWU1Mjg3MDE5MTA= | 3574 | apply_ufunc with dask='parallelized' and vectorize=True fails on compute_meta | smartass101 941907 | closed | 0 | 12 | 2019-11-26T12:45:55Z | 2020-01-22T15:43:19Z | 2020-01-22T15:43:19Z | NONE | **MCVE Code Sample**

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({
    'signal': (['das_time', 'das', 'record'], np.empty((1000, 120, 45))),
    'min_height': (['das'], np.empty((120,)))  # each DAS has a different resolution
})

def some_peak_finding_func(data1d, min_height):
    """process data1d with constraints by min_height"""
    result = np.zeros((4, 2))  # summary matrix with 2 peak characteristics
    return result

ds_dask = ds.chunk({'record': 3})

xr.apply_ufunc(some_peak_finding_func,
               ds_dask['signal'],
               ds_dask['min_height'],
               input_core_dims=[['das_time'], []],  # apply peak finding along trace
               output_core_dims=[['peak_pos', 'pulse']],
               vectorize=True,  # up to here works without dask!
               dask='parallelized',
               output_sizes={'peak_pos': 4, 'pulse': 2},
               output_dtypes=[np.float],
               )
```

fails with **Expected Output** This should work and works well on the non-chunked ds, without **Problem Description** I'm trying to parallelize a peak finding routine with dask (works well without it) and I hoped that https://github.com/dask/dask/blob/e6ba8f5de1c56afeaed05c39c2384cd473d7c893/dask/array/utils.py#L118 A possible solution might be for I know I could use groupby-apply as an alternative, but there are several issues that made us use **Output of**
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3574/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
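The failure comes from dask probing the vectorized function with size-0 arrays to infer output metadata (`compute_meta`). One defensive workaround, sketched here on the MCVE's placeholder function (the cleaner direction discussed in the issue is for xarray to pass `meta` to dask explicitly so no probe call happens), is to short-circuit the probe:

```python
import numpy as np

def some_peak_finding_func(data1d, min_height):
    """Process data1d with constraints by min_height (MCVE placeholder)."""
    # dask's compute_meta may call this with empty inputs just to infer
    # the output dtype/shape; returning the summary shape directly keeps
    # that probe from crashing a real peak-finding routine.
    if data1d.size == 0:
        return np.zeros((4, 2))
    result = np.zeros((4, 2))  # summary matrix with 2 peak characteristics
    return result
```

This keeps the `(peak_pos, pulse)` output shape consistent whether the function is called with real traces or with the zero-size probe arrays.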
189998469 | MDU6SXNzdWUxODk5OTg0Njk= | 1130 | pipe, apply should call maybe_wrap_array, possibly resolve dim->axis | smartass101 941907 | closed | 0 | 6 | 2016-11-17T10:04:10Z | 2019-01-24T18:34:38Z | 2019-01-24T18:34:37Z | NONE | While I've often tried piping functions which at first looked like ufuncs only to find out that they forgot to call Since many such functions expect an

1) check if axis argument is a string and coerce it to a number, something like

Simple, but specifying

2) similar to 1., but only if both

Other coding might be possible.

3) use some syntax similar to

Let me know what you think and perhaps you'll come up with some nicer syntax for dim -> axis resolution. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue | ||||||
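Option 1 above can be sketched as a thin wrapper that resolves a dimension name to a positional axis before calling a numpy-style function. `with_dim_as_axis` and its `dims` argument are illustrative names, with `dims` standing in for the array's `.dims` tuple:

```python
import numpy as np

def with_dim_as_axis(func, dims):
    """Wrap a numpy-style func so axis= also accepts a dimension name."""
    def wrapper(data, axis=None, **kwargs):
        if isinstance(axis, str):
            axis = dims.index(axis)  # coerce dim name -> positional axis
        return func(data, axis=axis, **kwargs)
    return wrapper
```

For example, wrapping `np.sum` over dims `('time', 'x')` lets `axis='x'` reduce axis 1. The catch, as the options above hint, is that some functions legitimately accept string axis values, which is why a blanket coercion needs the extra guards of option 2.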
190026722 | MDExOlB1bGxSZXF1ZXN0OTQxNTMzNjE= | 1131 | Fix #1040: diff dim argument should be optional | smartass101 941907 | closed | 0 | 2 | 2016-11-17T11:55:53Z | 2019-01-14T21:18:18Z | 2019-01-14T21:18:18Z | NONE | 0 | pydata/xarray/pulls/1131 |
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | |||||
187373423 | MDU6SXNzdWUxODczNzM0MjM= | 1080 | accessor extending approach limits functional programming approach, make direct monkey-patching also possible | smartass101 941907 | closed | 0 | 9 | 2016-11-04T16:06:34Z | 2016-12-06T10:44:16Z | 2016-12-06T10:44:16Z | NONE | Hi, thanks for creating and continuing development of xarray. I'm in the process of converting my own functions and classes to it which did something very similar (label indexing, plotting, etc.) but was inferior in many ways. Right now I'm designing a set of functions for digital signal processing (I need them the most, though interpolation is also important), mostly lowpass/highpass filters and spectrograms based on I agree that making sure that adding a method to the class does not overwrite something else is a good idea, but that can be done for single methods as well. It would be even possible to save replaced methods somewhere and replace them later if requested. The great advantage is that the added methods can still be first-class functions as well. Such methods cannot save state as easily as accessor methods, but in many cases that is not necessary. I actually implemented something similar for my DataArray-like class (before xarray existed, now I'm trying to convert to

```python
'''Module for handling various DataArray method plugins'''
from xarray import DataArray
from types import FunctionType

# map: name of patched method -> stack of previous methods
_REPLACED_METHODS = {}

def patch_dataarray(method_func):
    '''Sets method_func as a method of the DataArray class'''
    ...

def restore_method(method_func):
    '''Restore a previous version of a method of the DataArray class'''
    method_name = method_func.__name__
    try:
        method_stack = _REPLACED_METHODS[method_name]
    except KeyError:
        return  # no previous method to restore
    previous_method = method_stack.pop(-1)
    if previous_method is None:
        delattr(DataArray, method_name)
    else:
        setattr(DataArray, method_name, previous_method)

def unload_module_patches(module):
    '''Restore previous versions of methods found in the given module'''
    for name in dir(module):
        obj = getattr(module, name)
        if isinstance(obj, FunctionType):
            restore_method(obj)

def patch_dataarray_wraps(func, func_name=None):
    '''Return a decorator that patches DataArray with the decorated function'''
    ...
```
 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
completed | xarray 13221727 | issue |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```