issue_comments


8 rows where issue = 33637243 sorted by updated_at descending


Facets: user: shoyer (4), jhamman (4); issue: Dataset summary methods (8); author_association: MEMBER (8)
shoyer (MEMBER) · 2014-05-16T18:44:26Z · https://github.com/pydata/xarray/issues/131#issuecomment-43365791

Module wide configuration flags are generally a bad idea, because such non-local effects make it harder to predict how code works. This is less of a concern for configuration options which only change how objects are displayed, which I believe is the only way such flags are used in numpy or pandas.
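
For comparison, a minimal example of such display-only flags (both are real numpy/pandas calls; they change how objects print, not how any computation behaves):

import numpy as np
import pandas as pd

np.set_printoptions(precision=3)       # numpy array reprs show 3 decimal places
pd.set_option('display.max_rows', 10)  # pandas truncates long DataFrame reprs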

But I don't have any objections to adding a method option.

jhamman (MEMBER) · 2014-05-16T17:49:14Z · https://github.com/pydata/xarray/issues/131#issuecomment-43359850

Both NCO and CDO keep all attributes and, as you mention, maintain a history attribute, even for operations like "variance" where the units are no longer accurate.

Maybe we're headed toward a user-specified option to keep the attributes around, with the default being option 1. I can see this existing at any (but probably not all) of these levels (a hypothetical sketch follows below):

- module (xray.maintain_attributes = True)
- class (a keyword in Dataset or DataArray: __init__(self, ..., maintain_attributes=True))
- method (ds.mean(dim='time', maintain_attributes=True))

This approach would put the onus on the user to specify they want to keep metadata around. My preference would be to apply this at the module level.
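
A hypothetical sketch of the three proposed levels. The maintain_attributes spellings below do not exist in xray; they only illustrate the proposal (the Dataset constructor uses the modern dict spelling):

# Hypothetical API sketch: `maintain_attributes` is a proposed name, not an
# existing xray option, so this snippet is illustrative rather than runnable.
import xray

# module level: one global switch for everything
xray.maintain_attributes = True

# class level: opt in when constructing an object
ds = xray.Dataset({'temp': ('time', [280.0, 281.5])},
                  maintain_attributes=True)

# method level: opt in per operation
ds_mean = ds.mean(dim='time', maintain_attributes=True)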

shoyer (MEMBER) · 2014-05-16T17:17:16Z · https://github.com/pydata/xarray/issues/131#issuecomment-43356581

You're right that keeping attributes fully intact under any operation is a perfectly reasonable alternative to dropping them.

So what do NCO and CDO do with attributes when you calculate the variance along a dimension of a variable? The choices, as I see them, are:

1. Drop all attributes.
2. Keep all attributes.
3. Keep all attributes with the exception of "units" (which is dropped).
4. Keep all attributes, but modify "units" according to the mathematical operation.

For xray, 2 is out, because it leaves wrong metadata intact. 3 and 4 are out, because we don't want to be in the business of relying on metadata. This leaves 1 -- dropping all attributes.

For consistency, if 1 is the choice we need to make for "variance", then the same rule should apply to all "reduce" operations, including apparently innocuous operations like "mean". Note that this is also consistent with how xray handles attributes in all other mathematical operations -- even adding 0 or multiplying by 1 removes all attributes.
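
A minimal sketch of that behavior, written in the modern xarray spelling (the package was still named xray when this was posted); by default, even adding 0 returns a result with empty attrs:

import numpy as np
import xarray as xr  # successor to the `xray` package discussed here

arr = xr.DataArray(np.arange(3), dims='x', attrs={'units': 'kelvin'})
print(arr.attrs)        # {'units': 'kelvin'}
print((arr + 0).attrs)  # {} -- attributes are dropped by the operation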

My sense (not being a heavy user of these tools) is that NCO and CDO have a little bit more freedom to keep around metadata because they maintain a "history" attribute.

Loading files from disk is a little different. Notice that once variables get loaded into xray, any attributes that were used for decoding have been removed from "attributes" and moved to "encoding". These decoding attributes only exist in files on disk (unavoidable, given the limitations of NetCDF).
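
A sketch of the attrs-versus-encoding split, again in modern xarray spelling and with a hypothetical file name:

import xarray as xr

ds = xr.open_dataset('example.nc')  # hypothetical NetCDF file
# Attributes consumed during decoding (e.g. the time variable's 'units'
# and 'calendar') are moved from .attrs to .encoding on load.
print(ds['time'].attrs)                  # decoding attributes are gone
print(ds['time'].encoding.get('units'))  # e.g. 'days since 2000-01-01'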

shoyer (MEMBER) · 2014-05-16T04:07:46Z (edited 2014-05-16T16:43:36Z) · https://github.com/pydata/xarray/issues/131#issuecomment-43294717

As a note on your points (1) and (2): currently, we remove all dataset and array attributes when doing any operations other than (re)indexing. This includes reduce operations like mean, because it didn't seem safe to assume that the original attributes were still descriptive. In particular, I was worried about units.

I'm willing to reconsider this, but in general I would like to avoid any functionality that is metadata aware other than dimension and coordinate labels. In my experience, systems that rely on attributes become much more complex and harder to predict, so I would like to avoid that. I don't see a unit system as in scope for xray, at least not at this time.

Your solution 4(b) -- dropping coordinates rather than attempting to summarize them -- would also be my preferred approach. It is consistent with pandas (try df.mean(level='time')), and quite often labels can't be meaningfully reduced anyway (e.g., suppose a coordinate's ticks are labeled by datetimes or, worse, strings).
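
This is what modern xarray ended up doing; a small sketch of option 4(b), where reducing over a dimension drops that dimension's labels:

import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range('2000-01-01', periods=4)
da = xr.DataArray(np.ones((4, 2)), coords={'time': times}, dims=('time', 'x'))
reduced = da.mean(dim='time')
print(reduced.coords)  # the 'time' coordinate is gone, not summarized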

Speaking of non-numerical data, we will need to take an approach like pandas to ignore non-numerical variables when taking the mean. It might be worth taking a look at how pandas handles this, but I imagine using a try/except clause would be the sensible way to do that.
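
A minimal sketch of the try/except approach suggested here, over a plain dict of arrays (the names and structure are illustrative, not xray internals):

import numpy as np

def reduce_numeric(variables, func=np.mean):
    """Apply func to each array, silently skipping non-numeric variables."""
    reduced = {}
    for name, values in variables.items():
        try:
            reduced[name] = func(values)
        except TypeError:
            pass  # e.g. string arrays: np.mean raises TypeError
    return reduced

print(reduce_numeric({'t': np.arange(3.0), 'label': np.array(['a', 'b'])}))
# {'t': 1.0}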

If you're interested in taking a crack at an implementation, take a look at DataArray.reduce and Variable.reduce. Once we have a generic reduce function that handles the labels, injecting all the numpy methods like mean and sum is trivial.
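
A toy sketch of that injection pattern (the class below is a stand-in, not xray's actual Variable): once a generic reduce exists, each numpy reduction becomes a one-line method.

import numpy as np

class Variable:
    """Stand-in for xray's Variable, just enough to show the pattern."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def reduce(self, func, axis=None):
        # the real method would also handle dimension labels and attributes
        return Variable(func(self.data, axis=axis))

def inject_reduce_methods(cls):
    # bind each numpy reduction as a method that defers to reduce()
    for name in ('mean', 'sum', 'min', 'max', 'var', 'std'):
        func = getattr(np, name)
        setattr(cls, name,
                lambda self, axis=None, _func=func: self.reduce(_func, axis=axis))

inject_reduce_methods(Variable)
print(Variable([[1.0, 2.0], [3.0, 4.0]]).mean(axis=0).data)  # [2. 3.]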

jhamman (MEMBER) · 2014-05-16T16:32:44Z · https://github.com/pydata/xarray/issues/131#issuecomment-43351948

A couple more thoughts.

I agree that staying metadata-unaware is the best course of action. However, I think you can do that and still carry the dataset and variable attributes (in the same manner that NCO and CDO do). You just want to be explicit in the documentation, saying that the attributes are from the original dataset and that xray is not attribute-aware and is not a units system (except for the time variable, I guess).

jhamman (MEMBER) · 2014-05-16T06:15:03Z · https://github.com/pydata/xarray/issues/131#issuecomment-43300537

I'm willing to take a crack at it, but I'm guessing I'll be requesting some assistance along the way. Let me look into it a bit and I'll report back with how I see it going together.

jhamman (MEMBER) · 2014-05-16T03:06:41Z (edited 2014-05-16T03:07:16Z) · https://github.com/pydata/xarray/issues/131#issuecomment-43291229

I'm not sure we need to worry about the string representation too much. pandas.Panel has a limited string representation too (example). Then again, I find the pandas Panels difficult to work with. Maybe adding a thorough Dataset.describe() method would suffice.

To flesh out some of the desired functionality a bit more (I'm going to use numpy.mean as an example, but any numpy reduction function could be applied; a usage sketch follows the list):

1. Dataset.mean() returns a new Dataset, with all the variables and attributes from the original Dataset reduced along all dimensions.
2. Dataset.mean(dim='some_dim_name') returns a new Dataset, with all the variables and attributes from the original Dataset reduced along the some_dim_name dimension.
3. Dataset.mean(dim=['Y', 'X']) returns a new Dataset, with all the variables from the original Dataset reduced along the Y and X dimensions.
4. What to do with the reduced dimensions/variables? Reduced variables (e.g. when the mean is taken along the time dimension) could be (a) reduced in the same manner (e.g. leave the time variable in the Dataset and just take the mean of the time array), or (b) removed, thereby reducing the Dataset's dimensions. I think the cleanest way would be to remove the reduced dimensions/variables (b).
5. Any implementation should play nice with the Dataset.groupby objects (#122).
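
A usage sketch of items (1)-(4b), written against modern xarray (behavior at the time may have differed):

import numpy as np
import xarray as xr

ds = xr.Dataset({'temp': (('time', 'Y', 'X'), np.ones((3, 2, 2)))})

ds.mean()                # (1) reduce every variable over all dimensions
ds.mean(dim='time')      # (2) reduce along one named dimension
ds.mean(dim=['Y', 'X'])  # (3) reduce along several dimensions at once
print(ds.mean(dim='time').dims)  # (4b) 'time' has been removed entirely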

shoyer (MEMBER) · 2014-05-16T00:29:26Z (edited 2014-05-16T01:46:59Z) · https://github.com/pydata/xarray/issues/131#issuecomment-43282058

Thanks for raising this as a separate issue. Yes, I agree it would be nice to add these summary methods! We can imagine DataArray methods on Datasets mapping over all variables in a somewhat similar way to how groupby methods map over each group.

These methods are very convenient for pandas.DataFrame objects, so it makes sense to have them for xray.Dataset, too.

The only unfortunate aspect is that it is harder to see the values in a Dataset, because they aren't shown in the standard string representation. In contrast, methods like DataFrame.describe() (or even just DataFrame.mean()) are more convenient because they give you another DataFrame back, which shows all the relevant values. I'm not sure if the solution is to come up with a better Dataset representation that shows more numbers, or to just encourage the use of to_dataframe().
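
A small sketch of the to_dataframe() route, in modern xarray spelling: reduce, then convert, so the resulting values are visible as a DataFrame.

import numpy as np
import xarray as xr

ds = xr.Dataset({'t': (('time', 'x'), np.arange(6.0).reshape(2, 3))})
print(ds.mean(dim='time').to_dataframe())
#      t
# x
# 0  1.5
# 1  2.5
# 2  3.5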


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);