
issue_comments


1 row where issue = 169274464 and user = 221526 sorted by updated_at descending

id: 300838234
html_url: https://github.com/pydata/xarray/issues/939#issuecomment-300838234
issue_url: https://api.github.com/repos/pydata/xarray/issues/939
node_id: MDEyOklzc3VlQ29tbWVudDMwMDgzODIzNA==
user: dopplershift (221526)
created_at: 2017-05-11T16:08:19Z
updated_at: 2017-05-11T16:08:19Z
author_association: CONTRIBUTOR

I agree that having too many keyword arguments is poor design; it's representative of either failing to abstract anything away or having the object/function just do too much. For a specific example, this jumps out at me as a problem:

```python
ds = conventions.decode_cf(
    store, mask_and_scale=mask_and_scale, decode_times=decode_times,
    concat_characters=concat_characters, decode_coords=decode_coords,
    drop_variables=drop_variables)
```

Already `open_dataset` takes 5 parameters just to pass on directly to another function. This means that to add a 6th to `decode_cf`, you have to update the code and docstring there, and then make those same changes to `open_dataset`. Now, you could argue that they're used again within `open_dataset`:

```python
token = tokenize(file_arg, group, decode_cf, mask_and_scale, decode_times,
                 concat_characters, decode_coords, engine, chunks,
                 drop_variables)
```

but again you're using all of these parameters together. If all of these variable values are needed to define the state, you already have an implicit object in your code; you're just not using the language syntax to help you by encapsulating it.

I'd be in favor of having lightweight classes (essentially mutable named tuples) rather than dictionaries. The former allows more discoverability of the interface (i.e. tab completion in IPython) as well as better up-front error checking (you could use `__slots__` to permit only certain attributes). My experience with assembling dictionaries for options is a world of typo-prone pain; preventing that is especially important when teaching new users. You could still give this class the right hooks (e.g. `__iter__`, `asdict`) to allow it to be passed as `**kwargs` to `decode_cf`.
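The kind of options class the comment describes could look roughly like the sketch below. The class name, defaults, and the stand-in `decode_cf` are hypothetical, not xarray's actual API; the point is that `__slots__` rejects typoed attributes, while `keys`/`__getitem__` let the object be splatted as `**kwargs`.

```python
class DecodeOptions:
    """Bundle of CF-decoding flags; __slots__ rejects misspelled attributes."""
    __slots__ = ('mask_and_scale', 'decode_times', 'concat_characters',
                 'decode_coords', 'drop_variables')

    def __init__(self, mask_and_scale=True, decode_times=True,
                 concat_characters=True, decode_coords=True,
                 drop_variables=None):
        self.mask_and_scale = mask_and_scale
        self.decode_times = decode_times
        self.concat_characters = concat_characters
        self.decode_coords = decode_coords
        self.drop_variables = drop_variables

    # The mapping protocol (keys + __getitem__) is what makes **options work
    # in a call, without converting to a dict first.
    def keys(self):
        return iter(self.__slots__)

    def __getitem__(self, key):
        return getattr(self, key)

    def asdict(self):
        return {name: getattr(self, name) for name in self.__slots__}


def decode_cf(store, mask_and_scale=True, decode_times=True,
              concat_characters=True, decode_coords=True,
              drop_variables=None):
    # Stand-in for conventions.decode_cf; just echoes what it received.
    return {'store': store, 'mask_and_scale': mask_and_scale,
            'decode_times': decode_times}


options = DecodeOptions(decode_times=False)
result = decode_cf('my-store', **options)  # or decode_cf('my-store', **options.asdict())
```

A misspelling like `options.mask_and_scal = True` raises `AttributeError` immediately, whereas a typoed dictionary key would pass silently until `decode_cf` rejected it (or worse, ignored it).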

reactions:

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Consider how to deal with the proliferation of decoder options on open_dataset (169274464)

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
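The schema above can be exercised with the standard-library `sqlite3` module. This sketch recreates the table in memory, inserts the single row shown on this page (body omitted), and runs the query behind "1 row where issue = 169274464 and user = 221526". The `REFERENCES` clauses are dropped here since the `users` and `issues` tables aren't shown.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
""")

# The one row from this page (values taken from the row above).
conn.execute(
    "INSERT INTO issue_comments (id, [user], issue, author_association, updated_at)"
    " VALUES (?, ?, ?, ?, ?)",
    (300838234, 221526, 169274464, 'CONTRIBUTOR', '2017-05-11T16:08:19Z'))

# The page's query: both WHERE columns are covered by the indexes above.
rows = conn.execute(
    "SELECT id, author_association FROM issue_comments"
    " WHERE issue = ? AND [user] = ? ORDER BY updated_at DESC",
    (169274464, 221526)).fetchall()
```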
Powered by Datasette · Queries took 2877.312ms · About: xarray-datasette