issue_comments


19 rows where user = 28786187 sorted by updated_at descending


id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1507176030 https://github.com/pydata/xarray/pull/7461#issuecomment-1507176030 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Z1a5e st-bender 28786187 2023-04-13T15:30:28Z 2023-04-13T15:30:28Z CONTRIBUTOR

Hi,

> > I assume you have given this a lot of thought, but imho the minimum dependency versions should be decided according to features needed, not timing.
>
> It's not based on timing. The policy is there so that, when a developer finds that they have to do extra labour to support an old version of a dependency, they can instead drop the support for the old version without needing to seek approval from the maintainers.

That's not how I interpret the link given by @dcherian, which states "rolling" minimum versions based on age.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1507150495 https://github.com/pydata/xarray/pull/7461#issuecomment-1507150495 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Z1Uqf st-bender 28786187 2023-04-13T15:13:28Z 2023-04-13T15:13:28Z CONTRIBUTOR

Hi @dcherian

> Here is our support policy for versions: https://docs.xarray.dev/en/stable/getting-started-guide/installing.html#minimum-dependency-versions though I think we dropped py38 too early.

I assume you have given this a lot of thought, but imho the minimum dependency versions should be decided according to features needed, not timing.

> For your current issue, I'm surprised this patch didn't fix it: conda-forge/conda-forge-repodata-patches-feedstock#429

Thanks for the pointer. I am not sure why; maybe I was updating too eagerly before the feedstock was fixed, but `mamba update --all` on py38 pulled in pandas 2.0 without updating xarray.

`python3.8 -m pip install xarray` will also result in incompatible versions.

cc @hmaarrfk @ocefpaf
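
A minimal sketch of a guard against the broken combination described above; the version bounds are taken from this thread, not from xarray's own metadata, and `packaging` is assumed to be installed:

```python
# Hedged sketch: refuse to run when the incompatible pair from this thread
# (xarray < 2023.2 together with pandas >= 2.0) is installed.
from packaging.version import Version

import pandas
import xarray

if Version(xarray.__version__) < Version("2023.2") and Version(pandas.__version__) >= Version("2"):
    raise RuntimeError(
        f"xarray {xarray.__version__} predates pandas 2.0 support; "
        f"pin 'pandas<2' in this environment (found pandas {pandas.__version__})"
    )
```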

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1503393910 https://github.com/pydata/xarray/pull/7461#issuecomment-1503393910 https://api.github.com/repos/pydata/xarray/issues/7461 IC_kwDOAMm_X85Zm_h2 st-bender 28786187 2023-04-11T13:50:42Z 2023-04-11T13:50:42Z CONTRIBUTOR

Hi, Just to let you know that this change breaks Python 3.8 setups with automatic updates: because the pandas version is not restricted, pandas will happily be updated to version 2 or higher, which is not compatible with xarray < 2023.2, and xarray >= 2023.2 cannot be installed on Python 3.8 because of this change. I don't know why the minimum Python version was changed; this PR doesn't say why it was necessary. Cheers.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  bump minimum versions, drop py38 1550109629
1230234269 https://github.com/pydata/xarray/issues/6953#issuecomment-1230234269 https://api.github.com/repos/pydata/xarray/issues/6953 IC_kwDOAMm_X85JU-Kd st-bender 28786187 2022-08-29T12:41:47Z 2022-08-29T12:41:47Z CONTRIBUTOR

Hi @mathause

> It does work if the array keeps the size:
>
> ```python
> data.resample(index="M").apply(lambda x: x.values)
> ```

Thanks, but I am not sure I find that intuitive: why should the resampled array have the same size as the original? That seems to only make sense for `DataArray.apply()`, not for a resampled one. As I indicated in my other reply, returning a scalar or equivalent should be fine, shouldn't it? At the very least the documentation is lacking: it refers to the pandas method, but the behaviour is clearly different.

> As a workaround you could allow your function to consume a dummy axis. Or you could pass `dim` as `...`:
>
> ```python
> data.resample(index="M").reduce(lambda x, axis: 1)  # workaround 1
> data.resample(index="M").reduce(lambda x: 1, dim=...)  # workaround 2
> ```
>
> (`reduce` only passes `axis` if `dim` is not `None`, but groupby passes the `group_dim` by default.)

That feels a bit like curing the symptoms instead of the root cause; why not set `dim=...` if it is not given?
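
For reference, a self-contained version of the two workarounds quoted above (the daily data here is made up, and `"M"` is the monthly frequency string used at the time of this thread):

```python
import numpy as np
import pandas as pd
import xarray as xr

data = xr.DataArray(
    np.arange(60.0),
    coords={"index": pd.date_range("2000-01-01", periods=60)},
    dims="index",
)

# Workaround 1: accept the axis keyword that resample/groupby passes along.
r1 = data.resample(index="M").reduce(lambda x, axis: np.median(x, axis=axis))

# Workaround 2: dim=... reduces over all dimensions, so no axis is passed.
r2 = data.resample(index="M").reduce(lambda x: np.median(x), dim=...)

print(r1.equals(r2))  # expected: True
```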

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray.resample().apply() fails to apply custom function 1350803561
1228267515 https://github.com/pydata/xarray/issues/6953#issuecomment-1228267515 https://api.github.com/repos/pydata/xarray/issues/6953 IC_kwDOAMm_X85JNd_7 st-bender 28786187 2022-08-26T09:22:21Z 2022-08-26T09:22:21Z CONTRIBUTOR

@mathause Thanks, that works for `np.median`, but `.reduce()` passes an `axis=` keyword argument, which any custom function then needs to take care of. I used `np.median` here just as an example, as it showed the problem.

@dcherian I am not sure that raising an error would be very user friendly. In my opinion, which may be biased by my personal use cases, I would expect `.apply()` (or `.map()` for that matter) on a resample object to take any function that returns a scalar: the value at the resampled point. It should then be up to xarray to iterate over any other, non-resampled dimensions. I am not sure why it requires returning a DataArray in the first place. As mentioned, it works with pandas Series and DataFrames, e.g.:

```python
data.to_pandas().resample("M").apply(np.median)
```
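
For comparison, a self-contained version of that pandas call, showing the scalar-returning behaviour being asked for (the daily data is made up):

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(60.0), index=pd.date_range("2000-01-01", periods=60))
print(s.resample("M").apply(np.median))  # one median per monthly bin
```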

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  DataArray.resample().apply() fails to apply custom function 1350803561
904084591 https://github.com/pydata/xarray/pull/5662#issuecomment-904084591 https://api.github.com/repos/pydata/xarray/issues/5662 IC_kwDOAMm_X8414zxv st-bender 28786187 2021-08-23T20:08:30Z 2021-08-23T20:11:26Z CONTRIBUTOR

@Illviljan Thanks for this comparison. I certainly prefer the second one; it looks better aligned to me, and we get a little more information on one page. But then that's only me. ;)

Edit: Not sure what you mean about the attributes; they all start in the 5th column, after 4 spaces, like the officially recommended Python indentation.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Limit and format number of displayed dimensions in repr 957439114
903520910 https://github.com/pydata/xarray/pull/5662#issuecomment-903520910 https://api.github.com/repos/pydata/xarray/issues/5662 IC_kwDOAMm_X8412qKO st-bender 28786187 2021-08-23T07:40:06Z 2021-08-23T07:40:06Z CONTRIBUTOR

> The tests look good!
>
> Any thoughts from anyone before we merge?

Looks good, but in my opinion this wastes some screen space on the left side. One could probably start a new line for the dimensions in a Dataset:

```python
Dimensions:
    (long_coord_name_0_x: 20, long_coord_name_10_x: 20, ...
```

or with some indenting:

```python
Dimensions: (long_coord_name_0_x: 20,
             long_coord_name_10_x: 20, ...
```

Maybe the same for the DataArray:

```python
<xarray.DataArray 'LongDataArrayName' (dim_0: 2, dim_1: 2, dim_2: 2, dim_3: 2,
    dim_4: 2, dim_5: 2, dim_6: 2, dim_7: 2, dim_8: 2, dim_9: 2, dim_10: 2,
    dim_11: 2)>
```

Just an idea.
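
To reproduce the kind of repr being discussed, a small sketch (the array name and dimension names are made up):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.zeros((2,) * 12),
    dims=[f"dim_{i}" for i in range(12)],
    name="LongDataArrayName",
)
print(da)  # the header line shows how the long dimension tuple is laid out
```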

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Limit and format number of displayed dimensions in repr 957439114
877021079 https://github.com/pydata/xarray/pull/5580#issuecomment-877021079 https://api.github.com/repos/pydata/xarray/issues/5580 MDEyOklzc3VlQ29tbWVudDg3NzAyMTA3OQ== st-bender 28786187 2021-07-09T08:41:15Z 2021-07-09T08:55:14Z CONTRIBUTOR

@keewis Thanks for the pointers; I'd say that nothing public-facing should change in 0.18 now. OT (edit): By the way, these incompatibilities happen when one side decides to change the API without considering that some users may actually use that interface (and looking at pandas' "deprecation" list, I fear this will only get worse). It is nice of the xarray people to have a section in their contribution guidelines about keeping backwards compatibility as much as possible.

As for the tests, I found the tests that @max-sixty put in and extended them (see the second and third commits in this PR). However, there is now one dataset setup followed by 4(!) asserts, which seems to be too much to follow nicely. Imagine all of them break: you fix the first, only to find out that the second breaks as well, so you fix that, only to find out that the third breaks too, and so on.

@Illviljan It is a good idea; however, I'd prefer if those changes were introduced as an option first, before changing the default behaviour.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataset `__repr__` updates 937336962
875075222 https://github.com/pydata/xarray/pull/5580#issuecomment-875075222 https://api.github.com/repos/pydata/xarray/issues/5580 MDEyOklzc3VlQ29tbWVudDg3NTA3NTIyMg== st-bender 28786187 2021-07-06T20:55:49Z 2021-07-06T21:02:46Z CONTRIBUTOR

Hi @max-sixty, Sure, but it will take a bit. Could you point me to the right places for the docs? Just the filenames would do.

I would be in favour of waiting a little with this change to get a few more opinions. It will ignore `display_max_rows` for everything except `dataset.__repr__`, which may not be what some people expect or hope for. Any ideas how to get more people to weigh in?

Edit: I would also like to separate the tests, to make it easier to follow when something breaks, but the setup for the test dataset would be the same. Any preferences or best practices for the code layout in such a case, without duplicating too much of the code? That can probably wait for another PR, though.
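
For context, the option under discussion can be tried out like this (a sketch with a made-up dataset of 30 scalar variables):

```python
import xarray as xr

ds = xr.Dataset({f"var_{i}": ((), float(i)) for i in range(30)})

# Only 10 of the 30 data variables are shown; the middle is elided with "...".
with xr.set_options(display_max_rows=10):
    print(ds)
```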

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  dataset `__repr__` updates 937336962
874209908 https://github.com/pydata/xarray/issues/5545#issuecomment-874209908 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg3NDIwOTkwOA== st-bender 28786187 2021-07-05T15:58:00Z 2021-07-05T17:43:09Z CONTRIBUTOR

Hi, @max-sixty I could give it a try, but my time is quite limited. Would you be fine with a diff? That would save me from setting up a fork and a new repo.

Anyway, here is a quick diff. I tried to keep it small and basically moved the `max_rows` setting to `dataset_repr`; only `coords_repr` takes a new keyword argument, so it should be backwards compatible. The tests would need to be updated. Maybe it is a good idea not to test `_mapping_repr`, but instead to test `coords_repr`, `data_vars_repr`, `attrs_repr`, and `dataset_repr` separately, to check that they do what they are supposed to do regardless of their implementation?

Edit: Never mind, I am preparing a PR with updated tests.

```diff
diff --git a/xarray/core/formatting.py b/xarray/core/formatting.py
index 07864e81..ab30facf 100644
--- a/xarray/core/formatting.py
+++ b/xarray/core/formatting.py
@@ -377,14 +377,12 @@ def _mapping_repr(
 ):
     if col_width is None:
         col_width = _calculate_col_width(mapping)
-    if max_rows is None:
-        max_rows = OPTIONS["display_max_rows"]
     summary = [f"{title}:"]
     if mapping:
         len_mapping = len(mapping)
         if not _get_boolean_with_default(expand_option_name, default=True):
             summary = [f"{summary[0]} ({len_mapping})"]
-        elif len_mapping > max_rows:
+        elif max_rows is not None and len_mapping > max_rows:
             summary = [f"{summary[0]} ({max_rows}/{len_mapping})"]
             first_rows = max_rows // 2 + max_rows % 2
             items = list(mapping.items())
@@ -416,7 +414,7 @@ attrs_repr = functools.partial(
 )
 
 
-def coords_repr(coords, col_width=None):
+def coords_repr(coords, col_width=None, max_rows=None):
     if col_width is None:
         col_width = _calculate_col_width(_get_col_items(coords))
     return _mapping_repr(
@@ -425,6 +423,7 @@ def coords_repr(coords, col_width=None):
         summarizer=summarize_coord,
         expand_option_name="display_expand_coords",
         col_width=col_width,
+        max_rows=max_rows,
     )
@@ -542,21 +541,22 @@ def dataset_repr(ds):
     summary = ["<xarray.{}>".format(type(ds).__name__)]
 
     col_width = _calculate_col_width(_get_col_items(ds.variables))
+    max_rows = OPTIONS["display_max_rows"]
 
     dims_start = pretty_print("Dimensions:", col_width)
     summary.append("{}({})".format(dims_start, dim_summary(ds)))
 
     if ds.coords:
-        summary.append(coords_repr(ds.coords, col_width=col_width))
+        summary.append(coords_repr(ds.coords, col_width=col_width, max_rows=max_rows))
 
     unindexed_dims_str = unindexed_dims_repr(ds.dims, ds.coords)
     if unindexed_dims_str:
         summary.append(unindexed_dims_str)
 
-    summary.append(data_vars_repr(ds.data_vars, col_width=col_width))
+    summary.append(data_vars_repr(ds.data_vars, col_width=col_width, max_rows=max_rows))
 
     if ds.attrs:
-        summary.append(attrs_repr(ds.attrs))
+        summary.append(attrs_repr(ds.attrs, max_rows=max_rows))
 
     return "\n".join(summary)
```

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
873193513 https://github.com/pydata/xarray/issues/5545#issuecomment-873193513 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg3MzE5MzUxMw== st-bender 28786187 2021-07-02T18:46:43Z 2021-07-02T18:46:43Z CONTRIBUTOR

@benbovy That sounds good to me. If I may add, I would leave `__repr__` and `__str__` returning the same thing, since people seem to use them interchangeably, e.g. in tutorials, and probably in their own code and notebooks.
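
That fallback is already built into Python: `str()` uses `__repr__` when no `__str__` is defined, so keeping the two identical is the path of least resistance. A tiny illustration:

```python
class Point:
    # Only __repr__ is defined; str() falls back to it automatically.
    def __repr__(self):
        return "Point(1, 2)"

p = Point()
print(repr(p), str(p))  # both print "Point(1, 2)"
```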

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
872424026 https://github.com/pydata/xarray/issues/5545#issuecomment-872424026 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg3MjQyNDAyNg== st-bender 28786187 2021-07-01T17:26:23Z 2021-07-01T17:26:23Z CONTRIBUTOR

@max-sixty I apologize if I hurt anyone, but it is hard to find a solution if we can't agree on the problem. Try the same examples with 50 or 100 variables instead of 2000 to understand what I mean. And to be honest, I found your comments a bit dismissive and not exactly welcoming either, which is probably also not your intention.

From what I see in the examples by @Illviljan, setting `display_max_rows` affects everything equally: coords, data_vars, and attrs. So there would be no need to treat them separately. Or maybe I misunderstood your comment.

Anyway, I think I made my point; I leave it up to you to decide what you are comfortable with.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
871674435 https://github.com/pydata/xarray/issues/5545#issuecomment-871674435 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg3MTY3NDQzNQ== st-bender 28786187 2021-06-30T19:36:26Z 2021-06-30T19:36:26Z CONTRIBUTOR

Hi @Illviljan, As I mentioned earlier, your "solution" is not backwards compatible, and it would be counterproductive to update the doctest, which is a different issue anyway and not relevant here.

I am not sure what you are trying to show; your datasets look very different from what I am working with, and they miss the point. Then again, they also prove my point: pandas and numpy shorten in a canonical way (except for the finite number of columns, which may make sense, but I don't like that either and would rather have it wrap and show all columns). xarray doesn't, because usually the variables are not simply numbered as in your example.

I am talking about medium-sized datasets with a few tens to maybe a few hundreds of non-canonical data variables. Have a look at http://cfconventions.org/ to get an impression of real-world variable names, or at the example linked above in comment https://github.com/pydata/xarray/issues/5545#issuecomment-870109486. There it would be nice to have an overview of all of them.

If too many variables are a problem, imo it would have been better to say: "We keep it as it is; however, if it is a problem for your large dataset, here is an option to reduce the amount of output: ..." and to put that in the docs, the wiki, an FAQ, or something similar. Note that the initial point in the linked issue is about the time it takes to print all variables, not the amount that gets shown. And usually the number of coordinates and attributes is smaller than the number of data variables. It also depends on what you call a "screen": my terminal currently has 48 lines (about 56 in fullscreen, depending on font size) and a scrollback buffer of 5000 lines, and I am also used to scrolling through long jupyter notebooks. Scrolling through your examples might be tedious (not for me, actually), but I will never be able to find typos hidden in the three dots.

@max-sixty No worries, I understand that this is a minor cosmetic issue; actually I intended it as a feature request, not a bug, but that must have gone missing along the way. I guess I could live with 50, any other opinions? I am sure someone else will complain about that too. ;)

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
870396123 https://github.com/pydata/xarray/issues/5545#issuecomment-870396123 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg3MDM5NjEyMw== st-bender 28786187 2021-06-29T08:36:04Z 2021-06-29T08:36:04Z CONTRIBUTOR

Hi @max-sixty

> We need to cut some of the output, given a dataset has arbitrary size — same as numpy arrays / pandas dataframes.

I thought about that too, but I believe these cases are slightly different. With numpy arrays you can almost guess what the full array looks like: you know the shape and get an impression of the magnitude of the entries (of course there can be exceptions that are not shown in the output). Similarly for pandas Series or DataFrames, the skipped index values are quite easy to guess. The names of the data variables in a dataset are almost impossible to guess, as are their dimensions and data types. The ellipsis usually indicates some kind of continuation, which is not really the case with data variables.
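
For comparison, the canonical truncation referred to above (illustrative only):

```python
import numpy as np
import pandas as pd

print(np.arange(10000))         # [   0    1    2 ... 9997 9998 9999]
print(pd.Series(range(10000)))  # head and tail rows around "..."
```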

> If people feel strongly about a default > 12, that seems reasonable. Do people?

I can't speak for other people, but I do, sorry about that. @shoyer's suggestion sounds good to me; off the top of my head, 30-100 variables in a dataset seems to be around what I have come across as a typical case. Which does not mean that it is the typical case.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
869950924 https://github.com/pydata/xarray/issues/5545#issuecomment-869950924 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg2OTk1MDkyNA== st-bender 28786187 2021-06-28T19:12:43Z 2021-06-28T19:12:43Z CONTRIBUTOR

I switched off the HTML rendering altogether because it really slows down the browser; I haven't had any problems with the text output. The text output is (was) also much more concise and does not require additional clicks to open the dataset and see which variables are in there.

The problem with your suggestion is that this approach is not backwards compatible, which is not nice towards long-term users; a larger default would be a bit like meeting half-way. I also respectfully disagree about the purpose of `__repr__()`, see for example https://docs.python.org/3/reference/datamodel.html#object.__repr__ . Cutting the output arbitrarily does not allow one to "recreate the object".

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
869726359 https://github.com/pydata/xarray/issues/5545#issuecomment-869726359 https://api.github.com/repos/pydata/xarray/issues/5545 MDEyOklzc3VlQ29tbWVudDg2OTcyNjM1OQ== st-bender 28786187 2021-06-28T14:19:01Z 2021-06-28T14:19:01Z CONTRIBUTOR

Why not increase that number to a more sensible value (as I suggested), or make it optional for people who have problems? If people are concerned and have problems, an option would let them fix that; the current default instead enforces such a low limit on everyone else.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Increase default `display_max_rows` 931591247
457527937 https://github.com/pydata/xarray/issues/1756#issuecomment-457527937 https://api.github.com/repos/pydata/xarray/issues/1756 MDEyOklzc3VlQ29tbWVudDQ1NzUyNzkzNw== st-bender 28786187 2019-01-25T10:28:04Z 2019-01-25T10:28:04Z CONTRIBUTOR

Hi, Thanks for the replies. I was indeed caught by surprise: given that the version number is 0.11.x, I had the impression that 0.12.x would be the next ~~major~~ minor release (and coming soon).

@shoyer In that case I would take that back and vote for changing it as soon as possible to stabilize the API. Although xarray is still considered beta, I guess some people already use it in production.

@jhamman Thanks for the offer, I think the changes were simple enough. I merely wanted to point out that some more people use(d) that feature.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Deprecate inplace methods 278713328
457131090 https://github.com/pydata/xarray/issues/1756#issuecomment-457131090 https://api.github.com/repos/pydata/xarray/issues/1756 MDEyOklzc3VlQ29tbWVudDQ1NzEzMTA5MA== st-bender 28786187 2019-01-24T09:40:01Z 2019-01-24T09:40:01Z CONTRIBUTOR

Hi, Sorry for the late comment on a closed bug, but I find changing the API a bit irritating, to say the least, and this is a serious change. Although apparently not many people use this feature, some actually may (myself included). So far there has been only one bug report, so what problem are you trying to fix? I can fix my own code, but there may be others out there who cannot keep pace with the development, and depending on their packages may then break software. For my taste the deprecation period is a bit short if you are going to remove such a feature in the very next version; a few more release cycles would be appreciated.

At the very least, put a big warning sign in the documentation that xarray is still beta and the API is still subject to change.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Deprecate inplace methods 278713328
424697772 https://github.com/pydata/xarray/pull/2236#issuecomment-424697772 https://api.github.com/repos/pydata/xarray/issues/2236 MDEyOklzc3VlQ29tbWVudDQyNDY5Nzc3Mg== st-bender 28786187 2018-09-26T12:32:34Z 2018-09-26T12:35:30Z CONTRIBUTOR

Hi, just to let you know that `.std()` does not accept the `ddof` keyword anymore (it worked in 0.10.8). Should I open a new bug report?

Edit: It fails with:

```
~/Work/miniconda3/envs/stats/lib/python3.6/site-packages/xarray/core/duck_array_ops.py in f(values, axis, skipna, **kwargs)
    234 
    235         try:
--> 236             return func(values, axis=axis, **kwargs)
    237         except AttributeError:
    238             if isinstance(values, dask_array_type):

TypeError: nanstd() got an unexpected keyword argument 'ddof'
```
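
A minimal reproducer for the regression, assuming `ddof=1` (the sample standard deviation); later releases accept the keyword again:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(100))
print(da.std(ddof=1))  # raised TypeError in the affected release
```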
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Refactor nanops 333248242


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);