issues
25 rows where state = "open" and user = 14808389, sorted by updated_at descending
id | node_id | number | title | user | state | locked | assignee | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | performed_via_github_app | state_reason | repo | type |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2194953062 | PR_kwDOAMm_X85qFqp1 | 8854 | array api-related upstream-dev failures | keewis 14808389 | open | 0 | 15 | 2024-03-19T13:17:09Z | 2024-05-03T22:46:41Z | MEMBER | 0 | pydata/xarray/pulls/8854 | This "fixes" the upstream-dev failures related to the removal of |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2269295936 | PR_kwDOAMm_X85uBwtv | 8983 | fixes for the `pint` tests | keewis 14808389 | open | 0 | 0 | 2024-04-29T15:09:28Z | 2024-05-03T18:30:06Z | MEMBER | 0 | pydata/xarray/pulls/8983 | This removes the use of the deprecated |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2234142680 | PR_kwDOAMm_X85sK0g8 | 8923 | `"source"` encoding for datasets opened from `fsspec` objects | keewis 14808389 | open | 0 | 5 | 2024-04-09T19:12:45Z | 2024-04-23T16:54:09Z | MEMBER | 0 | pydata/xarray/pulls/8923 | When opening files from path-like objects ( In this PR, I'm extracting the If this sounds like a good idea, I'll update the documentation of the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8923/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
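For context on #8923: xarray already records plain file paths under the `"source"` key of `Dataset.encoding`; the sketch below assumes the behavior the PR title proposes for `fsspec` objects, and the URL is purely illustrative.

```python
import fsspec
import xarray as xr

# existing behavior: a plain path is recorded in the encoding
ds = xr.open_dataset("data.nc")
print(ds.encoding["source"])  # "data.nc"

# proposed behavior (assumption based on the PR title): the path wrapped
# by the fsspec file object would be extracted into the encoding, too
with fsspec.open("s3://some-bucket/data.nc") as f:
    ds = xr.open_dataset(f)
    print(ds.encoding.get("source"))  # would be "s3://some-bucket/data.nc"
```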
2241492018 | PR_kwDOAMm_X85skF_A | 8937 | drop support for `python=3.9` | keewis 14808389 | open | 0 | 3 | 2024-04-13T10:18:04Z | 2024-04-15T15:07:39Z | MEMBER | 0 | pydata/xarray/pulls/8937 | According to our policy (and NEP-29) we can drop support for We could delay this until we have a release that is compatible with |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8937/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
2079089277 | I_kwDOAMm_X8577GJ9 | 8607 | allow computing just a small number of variables | keewis 14808389 | open | 0 | 4 | 2024-01-12T15:21:27Z | 2024-01-12T20:20:29Z | MEMBER | **Is your feature request related to a problem?** I frequently find myself computing a handful of variables of a dataset (typically coordinates) and assigning them back to the dataset, and wishing we had a method / function that allowed that. **Describe the solution you'd like** I'd imagine something like **Describe alternatives you've considered** So far I've been using something like **Additional context** No response |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/8607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
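The code blocks in the body of #8607 were lost in this export; below is a minimal sketch of the idea, where the `ds.compute("a")` signature is hypothetical and only the workaround uses existing API:

```python
import numpy as np
import xarray as xr

# requires dask for .chunk()
ds = xr.Dataset({"a": ("x", np.arange(4)), "b": ("x", np.arange(4) * 2)}).chunk()

# workaround with existing API: compute one variable, assign it back
ds = ds.assign(a=ds["a"].compute())

# hypothetical method sketched in the issue (not implemented):
# ds = ds.compute("a")
```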
1655290694 | I_kwDOAMm_X85iqbtG | 7721 | `as_shared_dtype` converts scalars to 0d `numpy` arrays if chunked `cupy` is involved | keewis 14808389 | open | 0 | 7 | 2023-04-05T09:48:34Z | 2023-12-04T10:45:43Z | MEMBER | I tried to run
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[4], line 1
----> 1 arr.chunk().where(mask).compute()

File ~/repos/xarray/xarray/core/dataarray.py:1095, in DataArray.compute(self, **kwargs)
   1094 new = self.copy(deep=False)
-> 1095 return new.load(**kwargs)

File ~/repos/xarray/xarray/core/dataarray.py:1069, in DataArray.load(self, **kwargs)
-> 1069 ds = self._to_temp_dataset().load(**kwargs)
   1070 new = self._from_temp_dataset(ds)
   1071 self._variable = new._variable

File ~/repos/xarray/xarray/core/dataset.py:752, in Dataset.load(self, **kwargs)
    751 # evaluate all the dask arrays simultaneously
--> 752 evaluated_data = da.compute(*lazy_data.values(), **kwargs)
    754 for k, data in zip(lazy_data, evaluated_data):
    755     self.variables[k].data = data

... (dask scheduling frames: dask/base.py:600, dask/threaded.py:89,
     dask/local.py:511, dask/local.py:319, dask/local.py:224,
     dask/core.py:119, dask/optimization.py:990, dask/core.py:149) ...

File <__array_function__ internals>:180, in where(*args, **kwargs)

File cupy/_core/core.pyx:1723, in cupy._core.core._ndarray_base.__array_function__()

File ~/.local/opt/mambaforge/envs/xarray/lib/python3.10/site-packages/cupy/_sorting/search.py:211, in where(condition, x, y)
    209 if fusion._is_fusing():
    210     return fusion._call_ufunc(_where_ufunc, condition, x, y)
--> 211 return _where_ufunc(condition.astype('?'), x, y)

File cupy/_core/_kernel.pyx:1287, in cupy._core._kernel.ufunc.__call__()
File cupy/_core/_kernel.pyx:160, in cupy._core._kernel._preprocess_args()
File cupy/_core/_kernel.pyx:146, in cupy._core._kernel._preprocess_arg()

TypeError: Unsupported type <class 'numpy.ndarray'>
```
I think the reason is that this: https://github.com/pydata/xarray/blob/d4db16699f30ad1dc3e6861601247abf4ac96567/xarray/core/duck_array_ops.py#L195 is not sufficient to detect cc @jacobtomlinson |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/7721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
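The reproducer at the start of the body of #7721 is truncated; reconstructing it from the traceback (the array and mask construction are assumptions, and running this requires a GPU with `cupy`):

```python
import cupy
import xarray as xr

arr = xr.DataArray(cupy.arange(4), dims="x")
mask = xr.DataArray(cupy.array([True, False, True, False]), dims="x")

# eager version works: the fill value reaches cupy.where as a scalar
arr.where(mask)

# chunked version fails: as_shared_dtype turns the scalar fill value into
# a 0d numpy array, which cupy.where rejects
arr.chunk().where(mask).compute()  # TypeError: Unsupported type <class 'numpy.ndarray'>
```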
1158378382 | I_kwDOAMm_X85FC3OO | 6323 | propagation of `encoding` | keewis 14808389 | open | 0 | 8 | 2022-03-03T12:57:29Z | 2023-10-25T23:20:31Z | MEMBER | What is your issue? We frequently get bug reports related to There are also a few discussions with more background:
- https://github.com/pydata/xarray/pull/5065#issuecomment-806154872
- https://github.com/pydata/xarray/issues/1614
- #5082
- #5336

We discussed this in the meeting yesterday and as far as I can remember agreed that the current default behavior is not ideal and decided to investigate #5336: a cc @rabernat, @shoyer |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6323/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
683142059 | MDU6SXNzdWU2ODMxNDIwNTk= | 4361 | restructure the contributing guide | keewis 14808389 | open | 0 | 5 | 2020-08-20T22:51:39Z | 2023-03-31T17:39:00Z | MEMBER | From #4355 @max-sixty: We could also add a docstring guide since the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4361/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
1306795760 | I_kwDOAMm_X85N5B7w | 6793 | improve docstrings with examples and links | keewis 14808389 | open | 0 | 10 | 2022-07-16T12:30:33Z | 2023-03-24T16:33:28Z | MEMBER | This is an (incomplete) checklist for #5816 to make it easier to find methods that are in need of examples and links to the narrative docs with further information (of course, changes to the docstrings of all other methods / functions that are part of the public API are also appreciated). Good examples explicitly construct small xarray objects to make it easier to follow (e.g. use Use
To easily generate the expected output, install To link to other documentation pages we can use
Top-level functions:
- [ ] I/O:
- [ ] Contents:
- [ ] Comparisons:
- [ ] Dask:
- [ ] Missing values:
- [ ] Indexing:
- [ ] Aggregations:
- [ ] |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6793/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
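As an illustration of the docstring style #6793 asks for (the object and values here are invented, not from the issue): construct a small object explicitly and show the exact output, so the example doubles as a doctest.

```python
>>> import xarray as xr
>>> ds = xr.Dataset({"a": ("x", [1, 2, 3])})
>>> ds["a"].mean().item()
2.0
```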
818059250 | MDExOlB1bGxSZXF1ZXN0NTgxNDIzNTIx | 4972 | Automatic duck array testing - reductions | keewis 14808389 | open | 0 | 23 | 2021-02-27T23:57:23Z | 2022-08-16T13:47:05Z | MEMBER | 1 | pydata/xarray/pulls/4972 | This is the first of a series of PRs to add a framework to make testing the integration of duck arrays as simple as possible. It uses |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
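The description of #4972 breaks off at "It uses"; assuming the framework builds on `hypothesis` (plausible for property-based duck-array testing, but an assumption here), a minimal sketch of such a reduction test, with all names invented:

```python
import hypothesis.strategies as st
import numpy as np
from hypothesis import given

import xarray as xr

# property: reducing a Variable agrees with reducing the raw data
@given(st.lists(st.floats(allow_nan=False, allow_infinity=False), min_size=1))
def test_mean_matches_numpy(data):
    variable = xr.Variable("x", np.array(data, dtype="float64"))
    np.testing.assert_allclose(variable.mean().values, np.mean(data))
```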
532696790 | MDU6SXNzdWU1MzI2OTY3OTA= | 3594 | support for units with pint | keewis 14808389 | open | 0 | 7 | 2019-12-04T13:49:28Z | 2022-08-03T11:44:05Z | MEMBER | |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3594/reactions", "total_count": 14, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 14, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
597566530 | MDExOlB1bGxSZXF1ZXN0NDAxNjU2MTc1 | 3960 | examples for special methods on accessors | keewis 14808389 | open | 0 | 6 | 2020-04-09T21:34:30Z | 2022-06-09T14:50:17Z | MEMBER | 0 | pydata/xarray/pulls/3960 | This starts adding the parametrized accessor examples from #3829 to the accessor documentation as suggested by @jhamman. Since then the Also, this feature can be abused to add functions to the main (~When trying to build the docs locally, sphinx keeps complaining about a code block without code. Not sure what that is about~ seems the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
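A rough sketch of the parametrized-accessor pattern from #3829 that #3960 documents; the accessor name and behavior are illustrative only:

```python
import xarray as xr

@xr.register_dataset_accessor("example")
class ExampleAccessor:
    """accessor whose __call__ makes it parametrizable: ds.example(offset=1).result"""

    def __init__(self, ds):
        self._ds = ds
        self._offset = 0

    def __call__(self, offset):
        # store the parameters and return self so properties can use them
        self._offset = offset
        return self

    @property
    def result(self):
        return self._ds + self._offset
```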
801728730 | MDExOlB1bGxSZXF1ZXN0NTY3OTkzOTI3 | 4863 | apply to dataset | keewis 14808389 | open | 0 | 14 | 2021-02-05T00:05:22Z | 2022-06-09T14:50:17Z | MEMBER | 0 | pydata/xarray/pulls/4863 | as discussed in #4837, this adds a method that applies a function to a This function is really similar to |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
959063390 | MDExOlB1bGxSZXF1ZXN0NzAyMjM0ODc1 | 5668 | create the context objects passed to custom `combine_attrs` functions | keewis 14808389 | open | 0 | 1 | 2021-08-03T12:24:50Z | 2022-06-09T14:50:16Z | MEMBER | 0 | pydata/xarray/pulls/5668 | Follow-up to #4896: this creates the context object in reduce methods and passes it to Note that for now this is a bit inconvenient to use for provenance tracking (as discussed in the |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | pull | ||||||
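For reference, custom `combine_attrs` callables (introduced in #4896) take the sequence of attrs dicts plus a context object; #5668 makes reduce methods pass a real context instead of `None`. The merging logic below is illustrative:

```python
import xarray as xr

a = xr.Dataset(attrs={"source": "model A"})
b = xr.Dataset(attrs={"history": "regridded"})

def union_attrs(attrs_list, context):
    # `context` is what this PR starts populating in reduce methods
    merged = {}
    for attrs in attrs_list:
        merged.update(attrs)
    return merged

print(xr.merge([a, b], combine_attrs=union_attrs).attrs)
# {'source': 'model A', 'history': 'regridded'}
```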
1265366275 | I_kwDOAMm_X85La_UD | 6678 | exception groups | keewis 14808389 | open | 0 | 1 | 2022-06-08T22:09:37Z | 2022-06-08T23:38:28Z | MEMBER | What is your issue? As I mentioned in the meeting today, we have a lot of features where the exception group support from PEP 654 (which is scheduled for python 3.11 and consists of the class and a syntax change) might be useful. For example, we might want to collect all errors raised by For |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/6678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
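A sketch of how PEP 654 exception groups could serve the use case in #6678 (python >= 3.11; the per-variable operation below is a stand-in, not xarray code):

```python
def decode(name, value):
    # stand-in for a per-variable operation that may fail
    if value < 0:
        raise ValueError(f"cannot decode {name!r}: negative value")
    return value

variables = {"a": 1, "b": -2, "c": -3}

errors = []
for name, value in variables.items():
    try:
        decode(name, value)
    except ValueError as error:
        errors.append(error)

if errors:
    # report every failure at once instead of stopping at the first
    raise ExceptionGroup("decoding failed for some variables", errors)
```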
624778130 | MDU6SXNzdWU2MjQ3NzgxMzA= | 4095 | merging non-dimension coordinates with the Dataset constructor | keewis 14808389 | open | 0 | 1 | 2020-05-26T10:30:37Z | 2022-04-19T13:54:43Z | MEMBER | When adding two This fails:
```python
In [1]: import xarray as xr
   ...: import numpy as np

In [2]: a = np.linspace(0, 1, 10)
   ...: b = np.linspace(-1, 0, 12)
   ...:
   ...: x_a = np.arange(10)
   ...: x_b = np.arange(12)
   ...:
   ...: y_a = x_a * 1000
   ...: y_b = x_b * 1000
   ...:
   ...: arr1 = xr.DataArray(data=a, coords={"x": x_a, "y": ("x", y_a)}, dims="x")
   ...: arr2 = xr.DataArray(data=b, coords={"x": x_b, "y": ("x", y_b)}, dims="x")
   ...:
   ...: xr.Dataset({"a": arr1, "b": arr2})
...
MergeError: conflicting values for variable 'y' on objects to be combined. You can skip this check by specifying compat='override'.
```
I can work around this by calling: |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
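The workaround sentence in #4095 is cut off; one option, hinted at by the error message itself (an assumption, continuing the example above), is an explicit merge that skips the compatibility check:

```python
# keeps "y" from the first object where the two conflict
ds = xr.merge([arr1.rename("a"), arr2.rename("b")], compat="override", join="outer")
```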
539181896 | MDU6SXNzdWU1MzkxODE4OTY= | 3638 | load_store and dump_to_store | keewis 14808389 | open | 0 | 1 | 2019-12-17T16:37:53Z | 2021-11-08T21:11:26Z | MEMBER | Continuing from #3602, what should we do with these? Are they obsolete and should be removed, or just unmaintained (in which case we should properly document and test them)? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
789106802 | MDU6SXNzdWU3ODkxMDY4MDI= | 4825 | clean up the API for renaming and changing dimensions / coordinates | keewis 14808389 | open | 0 | 5 | 2021-01-19T15:11:55Z | 2021-09-10T15:04:14Z | MEMBER | From #4108: I wonder if it would be better to first "reorganize" all of the existing functions: we currently have I believe we currently have these use cases (not sure if that list is complete, though):
- rename a

Sometimes, some of these can be emulated by combinations of others, for example:
```python
# x is a dimension without coordinates
assert_identical(ds.set_index({"x": "b"}), ds.swap_dims({"x": "b"}).rename({"b": "x"}))
assert_identical(ds.swap_dims({"x": "b"}), ds.set_index({"x": "b"}).rename({"x": "b"}))
```
In any case I think we should add a guide which explains which method to pick in which situation (or extend Originally posted by @keewis in https://github.com/pydata/xarray/issues/4108#issuecomment-761907785 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4825/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
935531700 | MDU6SXNzdWU5MzU1MzE3MDA= | 5562 | hooks to "prepare" xarray objects for plotting | keewis 14808389 | open | 0 | 6 | 2021-07-02T08:14:02Z | 2021-07-04T08:46:34Z | MEMBER | From https://github.com/xarray-contrib/pint-xarray/pull/61#discussion_r662485351 While this is pretty neat, there are some issues:
- All of this makes me wonder: should we try to maintain our own mapping of hooks which "prepare" the object based on the data's type? My initial idea would be that the hook function receives a For example for xref #5561 |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/5562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
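A sketch of the hook mapping floated in #5562; every name here is invented for illustration, not proposed API:

```python
# type -> hook; a hook converts wrapped data into something matplotlib
# can digest (e.g. strip pint units, move cupy arrays to the host)
_plotting_hooks = {}

def register_plotting_hook(array_type, hook):
    _plotting_hooks[array_type] = hook

def prepare_for_plotting(da):
    hook = _plotting_hooks.get(type(da.data))
    return hook(da) if hook is not None else da
```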
589850951 | MDU6SXNzdWU1ODk4NTA5NTE= | 3917 | running numpy functions on xarray objects | keewis 14808389 | open | 0 | 1 | 2020-03-29T18:17:29Z | 2021-07-04T02:00:22Z | MEMBER | In the Some of these functions, like Should we define |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3917/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
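A small demonstration of the dispatch gap #3917 describes (behavior as of recent numpy/xarray versions):

```python
import numpy as np
import xarray as xr

arr = xr.DataArray([1.0, 2.0, 4.0], dims="x")

# ufuncs dispatch through __array_ufunc__ and keep the xarray wrapper
print(type(np.sin(arr)))     # <class 'xarray.core.dataarray.DataArray'>

# np.median goes through __array_function__, which xarray objects do not
# define, so the result silently drops down to numpy
print(type(np.median(arr)))  # <class 'numpy.float64'>
```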
674445594 | MDU6SXNzdWU2NzQ0NDU1OTQ= | 4321 | push inline formatting functions upstream | keewis 14808389 | open | 0 | 0 | 2020-08-06T16:35:04Z | 2021-04-19T03:20:11Z | MEMBER | #4248 added a |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4321/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
675342733 | MDU6SXNzdWU2NzUzNDI3MzM= | 4324 | constructing nested inline reprs | keewis 14808389 | open | 0 | 9 | 2020-08-07T23:25:31Z | 2021-04-19T03:20:01Z | MEMBER | While implementing the new From that PR: @keewis @jthielen How should we deal with this? |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
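For reference, the single-level `_repr_inline_` protocol added in #4248, which #4324 wants to nest; the duck array here is invented:

```python
class DuckArray:
    """toy duck array implementing the inline-repr protocol"""

    def _repr_inline_(self, max_width):
        # xarray calls this to render the array inside one-line reprs;
        # nesting gets hard when this array wraps another duck array whose
        # own inline repr must also fit within max_width
        return "DuckArray(...)"[:max_width]
```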
791277757 | MDU6SXNzdWU3OTEyNzc3NTc= | 4837 | expose _to_temp_dataset / _from_temp_dataset as semi-public API? | keewis 14808389 | open | 0 | 5 | 2021-01-21T16:11:32Z | 2021-01-22T02:07:08Z | MEMBER | When writing accessors which behave the same for both Otherwise I guess it would be possible to use |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/4837/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
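A sketch of the pattern #4837 wants public API for: implement accessor logic once for `Dataset` and route `DataArray` through a temporary dataset. The helper names here are mine; `_to_temp_dataset` / `_from_temp_dataset` are today's private methods:

```python
import xarray as xr

def process(ds: xr.Dataset) -> xr.Dataset:
    return ds  # stand-in for the real per-dataset logic

def apply_to_object(obj):
    if isinstance(obj, xr.DataArray):
        # the private round-trip this issue proposes to expose
        return obj._from_temp_dataset(process(obj._to_temp_dataset()))
    return process(obj)
```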
552896124 | MDU6SXNzdWU1NTI4OTYxMjQ= | 3711 | PseudoNetCDF tests failing randomly | keewis 14808389 | open | 0 | 6 | 2020-01-21T14:01:49Z | 2020-03-23T20:32:32Z | MEMBER | The
```
self = <xarray.tests.test_backends.TestPseudoNetCDFFormat object at 0x000002E11FF2DC08>

xarray\tests\test_backends.py:3532:
xarray\core\formatting.py:628: in diff_dataset_repr
    summary.append(diff_attrs_repr(a.attrs, b.attrs, compat))

a_mapping = {'CPROJ': 0, 'FILEDESC': 'CAMx ', 'FTYPE': 1, 'GDNAM': 'CAMx ', ...}
b_mapping = {'CPROJ': 0, 'FILEDESC': 'CAMx ', 'FTYPE': 1, 'GDNAM': 'CAMx ', ...}
compat = 'identical', title = 'Attributes'
summarizer = <function summarize_attr at 0x000002E1156813A8>, col_width = None
```
|
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue | ||||||||
517195073 | MDU6SXNzdWU1MTcxOTUwNzM= | 3483 | assign_coords with mixed DataArray / array args removes coords | keewis 14808389 | open | 0 | 5 | 2019-11-04T14:38:40Z | 2019-11-07T15:46:15Z | MEMBER | I'm not sure if using I would expect the result to be the same regardless of the type of the new coords. |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/3483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```