
Hi @max-sixty

> We need to cut some of the output, given a dataset has arbitrary size — same as numpy arrays / pandas dataframes.

I thought about that too, but I believe these cases are slightly different. With numpy arrays you can almost guess what the full array looks like: you know the shape and get an impression of the magnitude of the entries (of course there can be exceptions that are not shown in the output). Similarly for pandas series or dataframes, the skipped index values are quite easy to guess. The names of the data variables in a dataset, on the other hand, are almost impossible to guess, as are their dimensions and data types. The ellipsis is usually used to indicate some kind of continuation, which is not really the case for data variables.
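For illustration (a minimal sketch; the variable names here are made up), compare how the two truncations read:

```python
import numpy as np
import xarray as xr

# A large numpy array: the repr keeps the shape-like context, so the
# hidden entries are easy to imagine.
print(np.arange(10000))
# [   0    1    2 ... 9997 9998 9999]

# A dataset with many variables: once the listing is cut, the names,
# dims and dtypes of the hidden variables cannot be guessed.
ds = xr.Dataset({f"var_{i}": ("x", np.zeros(3)) for i in range(40)})
print(ds)
```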

> If people feel strongly about a default > 12, that seems reasonable. Do people?

I can't speak for other people, but I do, sorry about that. @shoyer's suggestion sounds good to me; off the top of my head, 30-100 variables in a dataset seems to be around what I have come across as a typical case. Which does not mean that it is *the* typical case.
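If the default stays at 12, raising it per session is at least straightforward. A sketch, assuming the option under discussion is `display_max_rows` from `xr.set_options` (the values 50 and 100 are just examples):

```python
import numpy as np
import xarray as xr

# Example dataset with more variables than the current default shows.
ds = xr.Dataset({f"var_{i}": ("x", np.zeros(3)) for i in range(40)})

# Raise the limit globally for this session...
xr.set_options(display_max_rows=50)

# ...or only within a block.
with xr.set_options(display_max_rows=100):
    print(ds)
```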
