issues


7 rows where milestone = 799013 and type = "issue" sorted by updated_at descending

#343 · DataArrays initialized with the same data behave like views of each other
opened by earlew (NONE) · closed · milestone 0.4 · 2 comments · created 2015-02-27T23:19:39Z · closed 2015-03-03T06:02:56Z

I'm not sure if this qualifies as a bug but this behavior was surprising to me. If I initialize two DataArrays with the same array, the two DataArrays and the original initialization array are all linked as if they are views of each other.

A simple example:

```python
import numpy as np
import xray

# initialize two DataArrays with the same array
a = np.zeros((4, 4))
da1 = xray.DataArray(a, dims=['x', 'y'])
da2 = xray.DataArray(a, dims=['i', 'j'])
```

If I do `da1.loc[:, 2] = 12`, the same change occurs in `da2` and `a`. Likewise, doing `da2[dict(i=1)] = 29` also modifies `da1` and `a`.

The problem goes away if I explicitly pass copies of `a`, but I think copying should be the default behavior. If this behavior is intended, it should be clearly noted in the documentation.
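The sharing comes from NumPy's no-copy semantics: both DataArrays wrap the same underlying buffer. A NumPy-only sketch of the difference between a shared buffer and an explicit copy (leaving xray itself aside):

```python
import numpy as np

a = np.zeros((4, 4))
view = a[:]        # a view: shares a's buffer
copy = a.copy()    # an explicit copy: owns its own data

a[:, 2] = 12       # mimics da1.loc[:, 2] = 12
print(view[0, 2])  # 12.0 -- the view sees the change
print(copy[0, 2])  # 0.0  -- the copy does not
```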

#342 · Aggregations on datasets drop data variables with dtype=bool
opened by shoyer (MEMBER) · closed · milestone 0.4 · 0 comments · created 2015-02-27T20:12:21Z · closed 2015-03-02T18:14:11Z

```python
>>> xray.Dataset({'x': 1}).isnull().sum()
<xray.Dataset>
Dimensions: ()
Coordinates:
    empty
Data variables:
    empty
```

This is a bug.
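For reference, NumPy itself aggregates boolean arrays without trouble, so the variable is presumably being dropped in xray's Dataset reduction path rather than by NumPy; a minimal check:

```python
import numpy as np

# NumPy promotes booleans to integers when summing,
# so a bool variable has a perfectly well-defined sum
mask = np.array([True, False, True])
print(mask.sum())  # 2
```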

#293 · Use "coordinate variables"/"data variables" instead of "coordinates"/"variables"?
opened by shoyer (MEMBER) · closed · milestone 0.4 · 0 comments · created 2014-12-12T23:09:30Z · closed 2015-02-19T19:31:11Z

Recently, we introduced a distinction between "coordinates" and "variables" (see #197).

CF conventions make an analogous distinction between "coordinate variables" and "data variables": http://cfconventions.org/Data/cf-conventions/cf-conventions-1.6/build/cf-conventions.html

Would it be less confusing to use the CF terminology? I am leaning toward making this shift, because netCDF already has defined the term "variable", and xray's code still uses that internally. From a practical perspective, this would mean renaming Dataset.vars to Dataset.data_vars.

CC @akleeman @toddsmall

#316 · Not-quite-ISO timestamps
opened by sjpfenninger (CONTRIBUTOR) · closed · milestone 0.4 · 7 comments · created 2015-02-06T14:30:11Z · closed 2015-02-18T04:45:25Z

I have trouble reading NetCDF files obtained from MERRA. It turns out that their time unit is of the form "hours since 1982-1-10 0". Because there is only a single "0" for the hour, rather than "00", this is not an ISO compliant datetime string and pandas.Timestamp raises an error (see pydata/pandas#9434).

This makes it impossible to open such files without passing decode_times=False to open_dataset().

I wonder if this is a rare edge case or whether xray could attempt to handle it intelligently somewhere (maybe in conventions._unpack_netcdf_time_units). For now, I just used NCO to append an extra "0" to the time unit (luckily all files are the same, so I can apply it across the board): `ncatted -O -a units,time,a,c,"0" file.nc`
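As a Python-side workaround, the under-padded reference date could be normalized before it reaches the datetime machinery. A hypothetical sketch (the parsing strategy is illustrative, not xray's actual conventions code, and it assumes the reference date has exactly year, month, day, and hour fields):

```python
from datetime import datetime

units = "hours since 1982-1-10 0"
# split the unit string into the delta part and the reference date
delta, ref = (s.strip() for s in units.split("since"))
# zero-pad each field so the date parses cleanly
parts = ref.replace("-", " ").split()
padded = "{:0>4} {:0>2} {:0>2} {:0>2}".format(*parts)
print(datetime.strptime(padded, "%Y %m %d %H"))  # 1982-01-10 00:00:00
```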

#186 · Automatic label alignment
opened by shoyer (MEMBER) · closed · milestone 0.4 · 1 comment · created 2014-07-17T18:18:52Z · closed 2015-02-13T22:19:29Z

If we want to mimic pandas, we should support automatic alignment of coordinate labels in:

- [x] Mathematical operations (non-inplace, ~~in-place~~, see also #184)
- [x] All operations that add new dataset variables (merge, update, `__setitem__`)
- [x] All operations that create a new dataset (`__init__`, ~~concat~~)

For the latter two cases, it is not clear that an inner join on coordinate labels is the right choice, because that could lead to some surprising destructive operations. This should be considered carefully.
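For comparison, pandas arithmetic aligns on the union of the labels (an outer join) and fills non-overlapping labels with NaN rather than dropping them; a small sketch:

```python
import pandas as pd

s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10, 20, 30], index=["b", "c", "d"])

# arithmetic aligns on the union of labels; "a" and "d"
# have no counterpart in the other Series and become NaN
total = s1 + s2
print(total)
```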

#308 · BUG: xray fails to read netCDF files where coordinates do not refer to valid variables
opened by shoyer (MEMBER) · closed · milestone 0.4 · 0 comments · created 2015-01-09T00:05:40Z · closed 2015-02-04T07:21:01Z

Instead, we should verify that coordinates refer to valid variables and fail gracefully.

As reported by @mgarvert.

#209 · Rethink silently passing TypeError when encountered during Dataset aggregations
opened by shoyer (MEMBER) · closed · milestone 0.4 · 0 comments · created 2014-08-09T02:20:06Z · closed 2015-01-04T16:05:28Z

This is responsible for #205.


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
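The schema above, together with the page's filter (milestone = 799013, type = 'issue', ordered by updated_at descending), can be exercised with Python's built-in sqlite3 module; a sketch using a trimmed-down version of the table and two of the rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE issues (
        id INTEGER PRIMARY KEY, number INTEGER, title TEXT,
        milestone INTEGER, type TEXT, updated_at TEXT)"""
)
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (59308959, 343, "DataArrays ... behave like views",
         799013, "issue", "2015-03-03T06:02:56Z"),
        (59287686, 342, "Aggregations ... dtype=bool",
         799013, "issue", "2015-03-02T18:14:11Z"),
    ],
)
# the query behind this page, restricted to the trimmed columns
rows = conn.execute(
    "SELECT number FROM issues "
    "WHERE milestone = 799013 AND type = 'issue' "
    "ORDER BY updated_at DESC"
).fetchall()
print([n for (n,) in rows])  # [343, 342]
```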