
issues


7 rows where comments = 5, type = "issue" and user = 14808389 sorted by updated_at descending



id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
683142059 MDU6SXNzdWU2ODMxNDIwNTk= 4361 restructure the contributing guide keewis 14808389 open 0     5 2020-08-20T22:51:39Z 2023-03-31T17:39:00Z   MEMBER      

From #4355

@max-sixty:

Stepping back on the contributing doc — I admit I haven't looked at it in a while — I wonder whether we can slim it down a bit, for example by linking to other docs for generic tooling — I imagine we're unlikely to have the best docs on working with GH, for example. Or referencing our PR template rather than the (now out-of-date) PR checklist.

We could also add a docstring guide since the numpydoc guide does not cover every little detail (for example, default notation, type spec vs. type hint, space before the colon separating parameter names from types, no colon for parameters without types, etc.)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4361/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
628436420 MDU6SXNzdWU2Mjg0MzY0MjA= 4116 xarray ufuncs keewis 14808389 closed 0     5 2020-06-01T13:25:54Z 2022-04-19T03:26:53Z 2022-04-19T03:26:53Z MEMBER      

The documentation warns that the universal functions in `xarray.ufuncs` should not be used unless compatibility with `numpy < 1.13` is required.

Since we only support `numpy >= 1.15`: is it time to remove that (already deprecated) module?

Since there are also functions that are not true ufuncs (e.g. `np.angle` and `np.median`) and need `__array_function__` (or something similar, see #3917), we could also keep those and just remove the ones that are dispatched using `__array_ufunc__`.
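For context on the dispatch mechanism mentioned here: since numpy 1.13, calling a ufunc such as `np.sin` on a non-ndarray operand defers to that operand's `__array_ufunc__` method, which is what made the `xarray.ufuncs` wrappers redundant. A minimal sketch of the protocol with a toy class (not xarray's actual implementation):

```python
import numpy as np

class Wrapped:
    """Toy duck array demonstrating the __array_ufunc__ protocol."""

    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        if method != "__call__":
            return NotImplemented
        # unwrap any Wrapped inputs, apply the ufunc, re-wrap the result
        raw = [x.data if isinstance(x, Wrapped) else x for x in inputs]
        return Wrapped(ufunc(*raw, **kwargs))

w = Wrapped([0.0, np.pi / 2])
result = np.sin(w)  # numpy dispatches to Wrapped.__array_ufunc__
print(type(result).__name__, result.data)
```

Functions like `np.median` are not ufuncs, so they bypass this hook; intercepting them requires the separate `__array_function__` protocol, as the issue notes.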

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4116/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
789106802 MDU6SXNzdWU3ODkxMDY4MDI= 4825 clean up the API for renaming and changing dimensions / coordinates keewis 14808389 open 0     5 2021-01-19T15:11:55Z 2021-09-10T15:04:14Z   MEMBER      

From #4108:

I wonder if it would be better to first "reorganize" all of the existing functions: we currently have `rename` (and `Dataset.rename_dims` / `Dataset.rename_vars`), `set_coords`, `reset_coords`, `set_index`, `reset_index` and `swap_dims`, which overlap partially. For example, the code sample from #4417 works if instead of

```python
ds = ds.rename(b="x")
ds = ds.set_coords("x")
```

we use

```python
ds = ds.set_index(x="b")
```

and something similar for the code sample in #4107.

I believe we currently have these use cases (not sure if that list is complete, though):

  • rename a DataArray → `rename`
  • rename an existing variable to a name that is not yet in the object → `rename` / `Dataset.rename_vars` / `Dataset.rename_dims`
  • convert a data variable to a coordinate (not a dimension coordinate) → `set_coords`
  • convert a coordinate (not a dimension coordinate) to a data variable → `reset_coords`
  • swap an existing dimension coordinate with a coordinate (which may not exist) and rename the dimension → `swap_dims`
  • use an existing coordinate / data variable as a dimension coordinate (do not rename the dimension) → `set_index`
  • stop using a coordinate as dimension coordinate and append `_` to its name (do not rename the dimension) → `reset_index`
  • use two existing coordinates / data variables as a MultiIndex → `set_index`
  • stop using a MultiIndex as a dimension coordinate and use its levels as coordinates → `reset_index`

Sometimes, some of these can be emulated by combinations of others, for example:

```python
# x is a dimension without coordinates
assert_identical(ds.set_index({"x": "b"}), ds.swap_dims({"x": "b"}).rename({"b": "x"}))
assert_identical(ds.swap_dims({"x": "b"}), ds.set_index({"x": "b"}).rename({"x": "b"}))
```

and, with this PR:

```python
assert_identical(ds.set_index({"x": "b"}), ds.set_coords("b").rename({"b": "x"}))
assert_identical(ds.swap_dims({"x": "b"}), ds.rename({"b": "x"}))
```

which means that it would increase the overlap of `rename`, `set_index`, and `swap_dims`.

In any case I think we should add a guide which explains which method to pick in which situation (or extend howdoi).

Originally posted by @keewis in https://github.com/pydata/xarray/issues/4108#issuecomment-761907785

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4825/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
791277757 MDU6SXNzdWU3OTEyNzc3NTc= 4837 expose _to_temp_dataset / _from_temp_dataset as semi-public API? keewis 14808389 open 0     5 2021-01-21T16:11:32Z 2021-01-22T02:07:08Z   MEMBER      

When writing accessors which behave the same for both `Dataset` and `DataArray`, it would be incredibly useful to be able to use `DataArray._to_temp_dataset` / `DataArray._from_temp_dataset` to deduplicate code. Is it safe to use those in external packages (like pint-xarray)?

Otherwise I guess it would be possible to use

```python
name = da.name if da.name is not None else "__temp"
temp_ds = da.to_dataset(name=name)
new_da = temp_ds[name]
if da.name is None:
    new_da = new_da.rename(da.name)
assert_identical(da, new_da)
```

but that seems less efficient.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4837/reactions",
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
784628736 MDU6SXNzdWU3ODQ2Mjg3MzY= 4801 scheduled upstream-dev CI is skipped keewis 14808389 closed 0     5 2021-01-12T22:08:19Z 2021-01-14T00:09:43Z 2021-01-13T21:35:44Z MEMBER      

The scheduled CI is being skipped since about 6 days ago (which means this is probably due to the merge of #4729, see this run before the merge and this run after the merge).

This is really strange because `workflow_dispatch` events (for which the CI detection job is also skipped) still work perfectly fine.

Edit: it seems to be because of https://github.com/pydata/xarray/blob/f52a95cbe694336fe47bc5a42c713bee8ad74d64/.github/workflows/upstream-dev-ci.yaml#L34-L37; if I remove that, the scheduled CI runs.
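For readers without access to the linked lines: a hypothetical sketch (none of the names below are from the actual workflow) of how a job-level `if:` can silently disable scheduled runs. In GitHub Actions, a job whose `needs:` dependency was skipped is itself skipped, unless its own `if:` overrides the implicit `success()` check:

```yaml
jobs:
  detect-ci-trigger:
    runs-on: ubuntu-latest
    # runs only for push / pull_request events, so scheduled runs skip it
    if: github.event_name == 'push' || github.event_name == 'pull_request'
    steps:
      - run: echo "detect trigger"
  upstream-dev:
    runs-on: ubuntu-latest
    # a skipped `needs` dependency skips this job too, unless this job's
    # own `if` uses e.g. always() to override the default success() check
    needs: detect-ci-trigger
    steps:
      - run: echo "run upstream-dev tests"
```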

cc @andersy005

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4801/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
638211949 MDU6SXNzdWU2MzgyMTE5NDk= 4152 signature in accessor methods keewis 14808389 closed 0     5 2020-06-13T18:49:16Z 2020-09-06T23:05:04Z 2020-09-06T23:05:04Z MEMBER      

With the merge of #3988 we're now properly documenting the `str`, `dt` and `plot` accessors, but the signatures of the plotting methods are missing a few parameters. For example, `DataArray.plot.contour` is missing the parameter `x` (it's properly documented in the parameters section, though).

Also, we need to remove the `str` and `dt` accessors from Computing and try to figure out how to fix the summary of `DataArray.plot` (the plotting method).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/4152/reactions",
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
517195073 MDU6SXNzdWU1MTcxOTUwNzM= 3483 assign_coords with mixed DataArray / array args removes coords keewis 14808389 open 0     5 2019-11-04T14:38:40Z 2019-11-07T15:46:15Z   MEMBER      

I'm not sure if using `assign_coords` to overwrite the data of coords is the best way to do so, but using mixed args (on current master) turns out to have surprising results:

```python
>>> obj = xr.DataArray(
...     data=[6, 3, 4, 6],
...     coords={"x": list("abcd"), "y": ("x", range(4))},
...     dims="x",
... )
>>> obj
<xarray.DataArray 'obj' (x: 4)>
array([6, 3, 4, 6])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c' 'd'
    y        (x) int64 0 1 2 3

>>> # works as expected
>>> obj.assign_coords(coords={"x": list("efgh"), "y": ("x", [0, 2, 4, 6])})
<xarray.DataArray 'obj' (x: 4)>
array([6, 3, 4, 6])
Coordinates:
  * x        (x) <U1 'e' 'f' 'g' 'h'
    y        (x) int64 0 2 4 6

>>> # works, too (same as .data / .values)
>>> obj.assign_coords(coords={
...     "x": obj.x.copy(data=list("efgh")).variable,
...     "y": ("x", [0, 2, 4, 6]),
... })
<xarray.DataArray 'obj' (x: 4)>
array([6, 3, 4, 6])
Coordinates:
  * x        (x) <U1 'e' 'f' 'g' 'h'
    y        (x) int64 0 2 4 6

>>> # this drops "y"
>>> obj.assign_coords(coords={
...     "x": obj.x.copy(data=list("efgh")),
...     "y": ("x", [0, 2, 4, 6]),
... })
<xarray.DataArray 'obj' (x: 4)>
array([6, 3, 4, 6])
Coordinates:
  * x        (x) <U1 'e' 'f' 'g' 'h'
```

Passing a `DataArray` for `y`, like `obj.y * 2`, while also changing `x` (the type does not matter) always results in a `MergeError`:

```python
>>> obj.assign_coords(x=list("efgh"), y=obj.y * 2)
xarray.core.merge.MergeError: conflicting values for index 'x' on objects to be combined:
first value: Index(['e', 'f', 'g', 'h'], dtype='object', name='x')
second value: Index(['a', 'b', 'c', 'd'], dtype='object', name='x')
```

I would expect the result to be the same regardless of the type of the new coords.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/3483/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
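The filtered view at the top of this page ("7 rows where comments = 5, type = 'issue' and user = 14808389 sorted by updated_at descending") corresponds to a simple query against this schema. A runnable sketch using Python's built-in `sqlite3`, with the schema trimmed to a few columns and two sample rows copied from the table above:

```python
import sqlite3

# Trimmed version of the [issues] schema defined above.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE issues (
       id INTEGER PRIMARY KEY,
       number INTEGER,
       title TEXT,
       user INTEGER,
       state TEXT,
       comments INTEGER,
       updated_at TEXT,
       type TEXT
    );
    CREATE INDEX idx_issues_user ON issues (user);
    """
)

# Two sample rows taken from the table on this page.
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [
        (683142059, 4361, "restructure the contributing guide",
         14808389, "open", 5, "2023-03-31T17:39:00Z", "issue"),
        (628436420, 4116, "xarray ufuncs",
         14808389, "closed", 5, "2022-04-19T03:26:53Z", "issue"),
    ],
)

# The filter behind this page's row listing; ISO-8601 timestamps sort
# correctly as plain strings.
results = list(
    conn.execute(
        """
        SELECT number, title, state FROM issues
        WHERE comments = 5 AND type = 'issue' AND user = 14808389
        ORDER BY updated_at DESC
        """
    )
)
for number, title, state in results:
    print(number, title, state)
```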
Powered by Datasette · Queries took 1511.399ms · About: xarray-datasette