issue_comments


4 rows where author_association = "MEMBER" and issue = 548029687 sorted by updated_at descending



id: 573082250
html_url: https://github.com/pydata/xarray/issues/3681#issuecomment-573082250
issue_url: https://api.github.com/repos/pydata/xarray/issues/3681
node_id: MDEyOklzc3VlQ29tbWVudDU3MzA4MjI1MA==
user: dcherian (2448579)
created_at: 2020-01-10T15:31:49Z
updated_at: 2020-01-10T15:31:49Z
author_association: MEMBER
body:

Ah thanks, there is a bug here.

xarray is trying to overwrite indexes for the bnds dimension, but there are no indexes associated with that dimension. As a workaround, you can assign values for bnds with ds1["bnds"] = [0, 1] and ds2["bnds"] = [0, 1], and then use join="override".

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: concat result not correct for particular dataset (548029687)
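
As an illustration of the workaround in the comment above, here is a minimal sketch: it builds two small synthetic datasets (ds1 and ds2 are stand-ins for the CMIP5 files in the issue, not the real data), assigns an explicit bnds coordinate, and then concatenates with join="override".

import numpy as np
import xarray as xr

# Two small stand-in datasets with a "bnds" dimension and latitude
# coordinates that differ only by floating-point noise.
lat = np.linspace(-90, 90, 4)
ds1 = xr.Dataset(
    {"lat_bnds": (("lat", "bnds"), np.zeros((4, 2)))},
    coords={"lat": lat, "time": [1]},
)
ds2 = xr.Dataset(
    {"lat_bnds": (("lat", "bnds"), np.zeros((4, 2)))},
    coords={"lat": lat + 1e-10, "time": [2]},
)

# Workaround: give "bnds" explicit coordinate values so there is an index
# to override, then concatenate along time with join="override" so the
# lat values of the first dataset are reused for every input.
ds1["bnds"] = [0, 1]
ds2["bnds"] = [0, 1]
ds3 = xr.concat([ds1, ds2], dim="time", join="override")
print(ds3)
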
id: 573041249
html_url: https://github.com/pydata/xarray/issues/3681#issuecomment-573041249
issue_url: https://api.github.com/repos/pydata/xarray/issues/3681
node_id: MDEyOklzc3VlQ29tbWVudDU3MzA0MTI0OQ==
user: dcherian (2448579)
created_at: 2020-01-10T13:45:58Z
updated_at: 2020-01-10T13:45:58Z
author_association: MEMBER
body:

Since lat is a dimension coordinate (an "index variable"), you want join="override" instead of compat="override".

A PR to make the docs clearer on this would be appreciated!

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: concat result not correct for particular dataset (548029687)
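
To make the distinction in the comment above concrete, here is a hedged sketch with two synthetic datasets whose lat coordinates differ only by floating-point noise (ds1 and ds2 are illustrative names, not the files from the issue): compat="override" alone does not stop the lat indexes from being outer-joined, while join="override" does.

import numpy as np
import xarray as xr

lat = np.linspace(-90, 90, 4)
ds1 = xr.Dataset({"tas": (("time", "lat"), np.ones((1, 4)))},
                 coords={"lat": lat, "time": [1]})
ds2 = xr.Dataset({"tas": (("time", "lat"), np.ones((1, 4)))},
                 coords={"lat": lat + 1e-10, "time": [2]})

# compat="override" only skips equality checks for non-index variables; the
# slightly different lat index coordinates are still aligned with the default
# join="outer", so the result grows extra lat values padded with NaN.
with_compat = xr.concat([ds1, ds2], dim="time", compat="override")

# join="override" is what handles dimension (index) coordinates: the lat
# values from the first dataset are reused for every input.
with_join = xr.concat([ds1, ds2], dim="time", join="override")

print(with_compat.sizes["lat"], with_join.sizes["lat"])  # 8 vs. 4 here
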
id: 573034827
html_url: https://github.com/pydata/xarray/issues/3681#issuecomment-573034827
issue_url: https://api.github.com/repos/pydata/xarray/issues/3681
node_id: MDEyOklzc3VlQ29tbWVudDU3MzAzNDgyNw==
user: TomNicholas (35968931)
created_at: 2020-01-10T13:27:52Z
updated_at: 2020-01-10T13:27:52Z
author_association: MEMBER
body:

Looks like instead of using compat you can tell concat to just use the indexes (so lat, lon, etc.) from the leftmost object with join='left', so it will ignore the (small) differences. ds3 = xr.concat([ds1, ds2], dim='time', join='left') returns:

<xarray.Dataset>
Dimensions:    (bnds: 2, lat: 96, lon: 144, time: 1152)
Coordinates:
    height     float64 2.0
  * lon        (lon) float64 0.0 2.5 5.0 7.5 10.0 ... 350.0 352.5 355.0 357.5
  * lat        (lat) float64 -90.0 -88.11 -86.21 -84.32 ... 86.21 88.11 90.0
  * time       (time) object 2006-01-16 12:00:00 ... 2101-12-16 12:00:00
Dimensions without coordinates: bnds
Data variables:
    time_bnds  (time, bnds) object 2006-01-01 00:00:00 ... 2102-01-01 00:00:00
    lat_bnds   (time, lat, bnds) float64 -90.0 -89.05 -89.05 ... nan nan nan
    lon_bnds   (time, lon, bnds) float64 -1.25 1.25 1.25 ... 356.2 356.2 358.8
    tas        (time, lat, lon) float32 244.75986 244.83588 ... nan nan
Attributes:
    institution:            Norwegian Climate Centre
    institute_id:           NCC
    experiment_id:          rcp26
    source:                 NorESM1-ME 2011 atmosphere: CAM-Oslo (CAM4-Oslo-...
    model_id:               NorESM1-ME
    forcing:                GHG, SA, Oz, Sl, BC, OC
    parent_experiment_id:   historical
    parent_experiment_rip:  r1i1p1
    branch_time:            56940.0
    contact:                Please send any requests or bug reports to noresm...
    initialization_method:  1
    physics_version:        1
    tracking_id:            fd266701-a253-4b74-91e1-5c0213483ba2
    product:                output
    experiment:             RCP2.6
    frequency:              mon
    creation_date:          2012-05-16T19:40:05Z
    history:                2012-05-16T19:40:05Z CMOR rewrote data to comply ...
    Conventions:            CF-1.4
    project_id:             CMIP5
    table_id:               Table Amon (01 February 2012) 81f919710c21dca8a17...
    title:                  NorESM1-ME model output prepared for CMIP5 RCP2.6
    parent_experiment:      historical
    modeling_realm:         atmos
    realization:            1
    cmor_version:           2.7.1

Is that what you're after? Best to check through the data explicitly.

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: concat result not correct for particular dataset (548029687)
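
Following the advice above to check the data explicitly rather than eyeballing the repr, here is a small sketch of one way to do that (the datasets are tiny synthetic stand-ins, not the CMIP5 files; counting NaNs makes any silent misalignment visible):

import numpy as np
import xarray as xr

lat = np.linspace(-90, 90, 4)
ds1 = xr.Dataset({"tas": (("time", "lat"), np.full((1, 4), 1.0))},
                 coords={"lat": lat, "time": [1]})
ds2 = xr.Dataset({"tas": (("time", "lat"), np.full((1, 4), 2.0))},
                 coords={"lat": lat + 1e-10, "time": [2]})

# join='left' reindexes every input onto the coordinates of the first object;
# labels that do not match exactly come back as NaN.
ds3 = xr.concat([ds1, ds2], dim="time", join="left")

# Explicit checks instead of trusting the printed repr:
assert np.array_equal(ds3["lat"].values, ds1["lat"].values)  # left coords kept
print(int(ds3["tas"].isnull().sum()))  # number of values that became NaN
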
id: 573023497
html_url: https://github.com/pydata/xarray/issues/3681#issuecomment-573023497
issue_url: https://api.github.com/repos/pydata/xarray/issues/3681
node_id: MDEyOklzc3VlQ29tbWVudDU3MzAyMzQ5Nw==
user: TomNicholas (35968931)
created_at: 2020-01-10T12:51:49Z
updated_at: 2020-01-10T12:51:49Z
author_association: MEMBER
body:

> even though the latitude arrays are completely identical AFAIK.

Are they completely identical? I see you've used np.allclose, which only asserts that they are close, rather than np.array_equal, which asserts that they are identical. You might want to try concat with compat='override'?

reactions:
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: concat result not correct for particular dataset (548029687)
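
To illustrate the np.allclose versus np.array_equal point above, a small sketch with two invented latitude arrays that differ only by floating-point noise: they pass allclose (whose default tolerances are rtol=1e-05 and atol=1e-08) but fail array_equal.

import numpy as np

lat1 = np.linspace(-90, 90, 96)
lat2 = lat1 + 1e-10  # differs from lat1 only by tiny floating-point noise

print(np.allclose(lat1, lat2))     # True: within the default tolerances
print(np.array_equal(lat1, lat2))  # False: the values are not exactly equal
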

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
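
As a usage note, the filter described at the top of this page (author_association = "MEMBER" and issue = 548029687, sorted by updated_at descending) can be reproduced against a local SQLite copy of this table; the sketch below assumes a hypothetical database file named github.db containing the schema shown above.

import sqlite3

# Hypothetical local copy of the Datasette database; the filename is an assumption.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE author_association = 'MEMBER' AND issue = 548029687
    ORDER BY updated_at DESC
    """
).fetchall()

for comment_id, user_id, created_at, updated_at, association, body in rows:
    print(comment_id, updated_at, association)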