
issues


39 rows where milestone = 799013 and repo = 13221727 sorted by updated_at descending


type

  • pull 32
  • issue 7

state

  • closed 39

repo

  • xarray 39
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
57577216 MDExOlB1bGxSZXF1ZXN0MjkyNTA3MjA= 321 Automatic label-based alignment for math and Dataset constructor shoyer 1217238 closed 0   0.4 799013 0 2015-02-13T09:31:43Z 2015-03-03T06:24:02Z 2015-02-13T22:19:29Z MEMBER   0 pydata/xarray/pulls/321

Fixes #186.

This will be a major breaking change for v0.4. For example, we can now do things like this:

```
In [5]: x = xray.DataArray(range(5), dims='x')

In [6]: x
Out[6]:
<xray.DataArray (x: 5)>
array([0, 1, 2, 3, 4])
Coordinates:
  * x        (x) int64 0 1 2 3 4

In [7]: x[:4] + x[1:]
Out[7]:
<xray.DataArray (x: 3)>
array([2, 4, 6])
Coordinates:
  * x        (x) int64 1 2 3
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/321/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59599124 MDExOlB1bGxSZXF1ZXN0MzAzNDg3NTA= 356 Documentation updates shoyer 1217238 closed 0   0.4 799013 0 2015-03-03T06:01:03Z 2015-03-03T06:02:57Z 2015-03-03T06:02:56Z MEMBER   0 pydata/xarray/pulls/356

Fixes #343

(among other small changes)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/356/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59308959 MDU6SXNzdWU1OTMwODk1OQ== 343 DataArrays initialized with the same data behave like views of each other earlew 7462311 closed 0   0.4 799013 2 2015-02-27T23:19:39Z 2015-03-03T06:02:56Z 2015-03-03T06:02:56Z NONE      

I'm not sure if this qualifies as a bug but this behavior was surprising to me. If I initialize two DataArrays with the same array, the two DataArrays and the original initialization array are all linked as if they are views of each other.

A simple example:

```python
# initialize with same array:
a = np.zeros((4,4))
da1 = xray.DataArray(a, dims=['x', 'y'])
da2 = xray.DataArray(a, dims=['i', 'j'])
```

If I do da1.loc[:, 2] = 12, the same change occurs in da2 and a. Likewise, doing da2[dict(i=1)] = 29 also modifies da1 and a.

The problem is fixed if I explicitly pass copies of a, but I think copying should be the default behavior. If this behavior is intended, then I think it should be clearly noted in the documentation.
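
A minimal workaround sketch (assuming the pre-0.5 `xray` import name): pass explicit copies so the DataArrays no longer share memory with `a`.

```python
import numpy as np
import xray  # the pre-0.5 name of what is now xarray

a = np.zeros((4, 4))
da1 = xray.DataArray(a.copy(), dims=['x', 'y'])  # the copy breaks the link to `a`
da2 = xray.DataArray(a.copy(), dims=['i', 'j'])

da1.loc[:, 2] = 12            # modifies only da1
assert a[0, 2] == 0           # the original array is untouched
assert da2.values[0, 2] == 0  # and so is the other DataArray
```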

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/343/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
59592828 MDExOlB1bGxSZXF1ZXN0MzAzNDUzODA= 355 Partial fix for netCDF4 datetime issues shoyer 1217238 closed 0   0.4 799013 0 2015-03-03T04:11:51Z 2015-03-03T05:02:54Z 2015-03-03T05:02:52Z MEMBER   0 pydata/xarray/pulls/355

xref #340

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/355/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59548752 MDExOlB1bGxSZXF1ZXN0MzAzMTk2Mzc= 351 Switch the name of datetime components from 'time.month' to 'month' shoyer 1217238 closed 0   0.4 799013 0 2015-03-02T20:55:24Z 2015-03-02T23:20:09Z 2015-03-02T23:20:07Z MEMBER   0 pydata/xarray/pulls/351

Fixes #345

This lets you write things like:

```
counts = time.groupby('time.month').count()
counts.sel(month=2)
```

instead of the previously valid

```
counts.sel(**{'time.month': 2})
```

which is much more awkward. Note that this breaks existing code which relied on the old usage.

CC @jhamman
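
A rough end-to-end sketch of the new spelling (the constructor call is an assumption and may differ slightly in the 0.4-era API):

```python
import pandas as pd
import xray

times = pd.date_range('2000-01-01', periods=365)
da = xray.DataArray(range(365), [('time', times)])

counts = da.groupby('time.month').count()
print(counts.sel(month=2))  # February, selected with the plain 'month' name
```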

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/351/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59529630 MDExOlB1bGxSZXF1ZXN0MzAzMTA1Mjc= 350 Fix Dataset repr with netcdf4 datetime objects shoyer 1217238 closed 0   0.4 799013 0 2015-03-02T19:06:28Z 2015-03-02T19:25:01Z 2015-03-02T19:25:00Z MEMBER   0 pydata/xarray/pulls/350

Fixes #347

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/350/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59431613 MDExOlB1bGxSZXF1ZXN0MzAyNTQ5NjE= 348 Fix Dataset aggregate boolean shoyer 1217238 closed 0   0.4 799013 0 2015-03-02T02:26:27Z 2015-03-02T18:14:12Z 2015-03-02T18:14:11Z MEMBER   0 pydata/xarray/pulls/348

Fixes #342

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/348/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59287686 MDU6SXNzdWU1OTI4NzY4Ng== 342 Aggregations on datasets drop data variables with dtype=bool shoyer 1217238 closed 0   0.4 799013 0 2015-02-27T20:12:21Z 2015-03-02T18:14:11Z 2015-03-02T18:14:11Z MEMBER      

```
xray.Dataset({'x': 1}).isnull().sum()

<xray.Dataset>
Dimensions: ()
Coordinates:
    empty
Data variables:
    empty
```

This is a bug.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/342/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
59420145 MDExOlB1bGxSZXF1ZXN0MzAyNDk4NzU= 346 Fix bug where Coordinates could turn Variable objects in Dataset constructor shoyer 1217238 closed 0   0.4 799013 0 2015-03-01T22:08:36Z 2015-03-01T23:57:58Z 2015-03-01T23:57:55Z MEMBER   0 pydata/xarray/pulls/346

This manifested itself in some variables not being written to netCDF files, because they were determined to be trivial indexes (hence that logic was also updated to be slightly less questionable).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/346/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59032389 MDExOlB1bGxSZXF1ZXN0MzAwNTY0MDk= 337 Cleanup (mostly documentation) shoyer 1217238 closed 0   0.4 799013 3 2015-02-26T07:40:01Z 2015-02-27T22:22:47Z 2015-02-26T07:43:37Z MEMBER   0 pydata/xarray/pulls/337
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/337/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59033971 MDExOlB1bGxSZXF1ZXN0MzAwNTcxNTI= 338 Truncate long attributes when printing datasets shoyer 1217238 closed 0   0.4 799013 0 2015-02-26T07:57:40Z 2015-02-26T08:06:17Z 2015-02-26T08:06:05Z MEMBER   0 pydata/xarray/pulls/338

Only the first 500 characters are now shown, e.g.,

```
In [2]: xray.Dataset(attrs={'foo': 'bar' * 1000})
Out[2]:
<xray.Dataset>
Dimensions: ()
Coordinates:
    *empty*
Data variables:
    *empty*
Attributes:
    foo: barbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarb
         arbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarba
         rbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbar
         barbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarb
         arbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarba
         rbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbar
         barbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarb
         arbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarba...
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/338/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58682523 MDExOlB1bGxSZXF1ZXN0Mjk4NjQ5NzA= 334 Fix bug associated with reading / writing of mixed endian data. akleeman 514053 closed 0   0.4 799013 1 2015-02-24T01:57:43Z 2015-02-26T04:45:18Z 2015-02-26T04:45:18Z CONTRIBUTOR   0 pydata/xarray/pulls/334

The right solution to this is to figure out how to successfully round trip endian-ness, but that seems to be a deeper issue inside netCDF4 (https://github.com/Unidata/netcdf4-python/issues/346)

Instead, we force all data to little endian before writing with netCDF4.
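
A plain-numpy sketch of that idea (an illustration, not the actual backend code): convert any big-endian array to its little-endian equivalent before handing it to netCDF4.

```python
import numpy as np

big = np.arange(4, dtype='>f8')                   # big-endian float64
little = big.astype(big.dtype.newbyteorder('<'))  # same values, little-endian bytes
assert (big == little).all()
```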

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/334/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58854176 MDExOlB1bGxSZXF1ZXN0Mjk5NjI4MzY= 335 Add broadcast_equals method to Dataset and DataArray shoyer 1217238 closed 0   0.4 799013 0 2015-02-25T05:51:46Z 2015-02-26T04:35:52Z 2015-02-26T04:35:49Z MEMBER   0 pydata/xarray/pulls/335
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/335/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58857388 MDExOlB1bGxSZXF1ZXN0Mjk5NjQyMTE= 336 Add Dataset.drop and DataArray.drop shoyer 1217238 closed 0   0.4 799013 0 2015-02-25T06:35:18Z 2015-02-25T22:01:49Z 2015-02-25T22:01:49Z MEMBER   0 pydata/xarray/pulls/336

These are convenient shortcuts for removing variables or index labels from an xray object.
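
A short usage sketch (the `dim` keyword is assumed from this PR's description of dropping index labels):

```python
import xray

ds = xray.Dataset({'a': ('x', [1, 2, 3]), 'x': [10, 20, 30], 'b': 42})
ds.drop('b')                # drop a variable by name
ds.drop([10, 30], dim='x')  # drop index labels along a dimension
```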

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/336/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58537752 MDExOlB1bGxSZXF1ZXN0Mjk3ODM4Njc= 330 Improved error handling for datetime decoding errors shoyer 1217238 closed 0   0.4 799013 0 2015-02-23T03:08:40Z 2015-02-25T22:00:36Z 2015-02-23T03:11:03Z MEMBER   0 pydata/xarray/pulls/330

Fixes #323

We now get an error message with a lovely traceback when opening a dataset with invalid time units. For example:

```
Traceback (most recent call last):
  File "/Users/shoyer/dev/xray/xray/test/test_conventions.py", line 340, in test_invalid_units_raises_eagerly
    decode_cf(ds)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 775, in decode_cf
    decode_coords)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 716, in decode_cf_variables
    decode_times=decode_times)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 676, in decode_cf_variable
    data = DecodedCFDatetimeArray(data, units, calendar)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 340, in __init__
    raise ValueError(msg)
ValueError: unable to decode time units 'foobar since 123' with the default calendar. Try opening your dataset with decode_times=False. Full traceback:
Traceback (most recent call last):
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 331, in __init__
    decode_cf_datetime(example_value, units, calendar)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 130, in decode_cf_datetime
    delta = _netcdf_to_numpy_timeunit(delta)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 72, in _netcdf_to_numpy_timeunit
    return {'seconds': 's', 'minutes': 'm', 'hours': 'h', 'days': 'D'}[units]
KeyError: 'foobars'
```

Also includes a fix for a datetime decoding issue reported on the mailing list: https://groups.google.com/forum/#!topic/xray-dev/Sscsw5dQAqQ

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/330/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58679397 MDExOlB1bGxSZXF1ZXN0Mjk4NjMyMjU= 333 Unify netCDF4 and scipy backends in the public API shoyer 1217238 closed 0   0.4 799013 0 2015-02-24T01:20:01Z 2015-02-25T06:21:03Z 2015-02-25T06:21:01Z MEMBER   0 pydata/xarray/pulls/333

Fixes #273 and half of #272

To serialize a dataset to a string/bytes, simply use ds.to_netcdf(). This behavior copies DataFrame.to_csv() from pandas. The legacy dump and dumps methods are deprecated.
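
A short sketch of the behavior described above (file name hypothetical; the in-memory case relies on the scipy backend):

```python
import xray

ds = xray.Dataset({'x': 1})
raw = ds.to_netcdf()        # no path given: returns the serialized file as bytes
ds.to_netcdf('example.nc')  # with a path: writes to disk
```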

My main concern is that the default "format" option depends on which dependencies the user has installed and on whether they are saving to a file. That seems non-ideal, but it may be the most pragmatic choice given the limitations of the netCDF4 format.

This change also adds:

- Support for writing datasets to a particular NETCDF4 group
- Support for opening netCDF3 files from disk even without netCDF4-python, if scipy is installed.

CC @akleeman

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/333/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58635715 MDExOlB1bGxSZXF1ZXN0Mjk4Mzc2ODU= 332 Update time.season to use text labels like 'DJF' shoyer 1217238 closed 0   0.4 799013 0 2015-02-23T19:31:58Z 2015-02-23T19:43:41Z 2015-02-23T19:43:39Z MEMBER   0 pydata/xarray/pulls/332

Previously, I used numbers 1 through 4 for the sake of consistency with pandas, but such labels really were impossible to keep track of.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/332/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58545451 MDExOlB1bGxSZXF1ZXN0Mjk3ODc1NzM= 331 Documentation updates anticipating v0.4 shoyer 1217238 closed 0   0.4 799013 0 2015-02-23T05:51:42Z 2015-02-23T06:18:39Z 2015-02-23T06:18:35Z MEMBER   0 pydata/xarray/pulls/331
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/331/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53659919 MDExOlB1bGxSZXF1ZXN0MjY5NzM5NTc= 306 Fix coercion of numeric strings to objects shoyer 1217238 closed 0   0.4 799013 0 2015-01-07T17:45:23Z 2015-02-23T06:09:10Z 2015-01-07T18:14:31Z MEMBER   0 pydata/xarray/pulls/306

Fixes #305

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/306/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58474512 MDExOlB1bGxSZXF1ZXN0Mjk3NTk4MTk= 329 Dataset.apply works if func returns like-shaped ndarrays shoyer 1217238 closed 0   0.4 799013 0 2015-02-21T20:54:00Z 2015-02-23T00:35:25Z 2015-02-23T00:35:23Z MEMBER   0 pydata/xarray/pulls/329

This extends the recent change by @IamJeffG (#327).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/329/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58307190 MDExOlB1bGxSZXF1ZXN0Mjk2NzEzMjg= 327 Cleanly apply generic ndarrays to DataArray.groupby IamJeffG 2002703 closed 0   0.4 799013 1 2015-02-20T03:47:15Z 2015-02-20T04:41:10Z 2015-02-20T04:41:08Z CONTRIBUTOR   0 pydata/xarray/pulls/327

This is special cased for np.ndarrays: applying to DataArrays is not only inefficient but would also be wrong if the applied function wanted to change metadata.

Fixes #326

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/327/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58182276 MDExOlB1bGxSZXF1ZXN0Mjk1OTUyMjU= 325 Rename Dataset.vars -> data_vars and remove deprecated aliases shoyer 1217238 closed 0   0.4 799013 0 2015-02-19T09:01:45Z 2015-02-19T19:31:17Z 2015-02-19T19:31:11Z MEMBER   0 pydata/xarray/pulls/325

Fixes #293

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/325/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
51863801 MDU6SXNzdWU1MTg2MzgwMQ== 293 Use "coordinate variables"/"data variables" instead of "coordinates"/"variables"? shoyer 1217238 closed 0   0.4 799013 0 2014-12-12T23:09:30Z 2015-02-19T19:31:11Z 2015-02-19T19:31:11Z MEMBER      

Recently, we introduced a distinction between "coordinates" and "variables" (see #197).

CF conventions make an analogous distinction between "coordinate variables" and "data variables": http://cfconventions.org/Data/cf-conventions/cf-conventions-1.6/build/cf-conventions.html

Would it be less confusing to use the CF terminology? I am leaning toward making this shift, because netCDF has already defined the term "variable", and xray's code still uses it internally. From a practical perspective, this would mean renaming Dataset.vars to Dataset.data_vars.
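
For illustration, after the rename the split looks roughly like this (constructor details assumed):

```python
import xray

ds = xray.Dataset({'temperature': ('x', [280.0, 281.5]), 'x': [10, 20]})
list(ds.data_vars)  # ['temperature'] -- data variables only
list(ds.coords)     # ['x'] -- coordinate variables
```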

CC @akleeman @toddsmall

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/293/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
56817968 MDU6SXNzdWU1NjgxNzk2OA== 316 Not-quite-ISO timestamps sjpfenninger 141709 closed 0   0.4 799013 7 2015-02-06T14:30:11Z 2015-02-18T04:45:25Z 2015-02-18T04:45:25Z CONTRIBUTOR      

I have trouble reading NetCDF files obtained from MERRA. It turns out that their time unit is of the form "hours since 1982-1-10 0". Because there is only a single "0" for the hour, rather than "00", this is not an ISO compliant datetime string and pandas.Timestamp raises an error (see pydata/pandas#9434).

This makes it impossible to open such files unless passing decode_times=False to open_dataset().

I wonder if this is a rare edge case or if xray could attempt to handle it intelligently somewhere (maybe in conventions._unpack_netcdf_time_units). For now, I just used NCO to append an extra 0 to the time unit (luckily all the files are the same, so I can do this across the board): `ncatted -O -a units,time,a,c,"0" file.nc`
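
A rough pure-Python alternative (file name hypothetical; `decode_cf` lived in `xray.conventions` in this era, and re-decoding a partially decoded dataset is assumed to be harmless): open with decode_times=False, pad the unit string, then decode.

```python
import xray
from xray.conventions import decode_cf

ds = xray.open_dataset('merra_file.nc', decode_times=False)
units = ds['time'].attrs['units']        # e.g. 'hours since 1982-1-10 0'
ds['time'].attrs['units'] = units + '0'  # pad the hour field to '00'
ds = decode_cf(ds)
```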

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/316/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
58024857 MDExOlB1bGxSZXF1ZXN0Mjk1MDA2ODc= 322 Support reindexing with an optional fill method shoyer 1217238 closed 0   0.4 799013 0 2015-02-18T04:32:47Z 2015-02-18T04:42:00Z 2015-02-18T04:41:59Z MEMBER   0 pydata/xarray/pulls/322

e.g., pad, backfill or nearest
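
A minimal sketch of the new option (values hypothetical):

```python
import xray

da = xray.DataArray([10, 20, 30], [('x', [0, 1, 2])])
da.reindex(x=[0, 1, 2, 3], method='pad')  # forward-fill the value for the new label 3
```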

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/322/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
38110382 MDU6SXNzdWUzODExMDM4Mg== 186 Automatic label alignment shoyer 1217238 closed 0   0.4 799013 1 2014-07-17T18:18:52Z 2015-02-13T22:19:29Z 2015-02-13T22:19:29Z MEMBER      

If we want to mimic pandas, we should support automatic alignment of coordinate labels in:

- [x] Mathematical operations (non-inplace, ~~in-place~~, see also #184)
- [x] All operations that add new dataset variables (merge, update, `__setitem__`)
- [x] All operations that create a new dataset (`__init__`, ~~concat~~)

For the latter two cases, it is not clear that using an inner join on coordinate labels is the right choice, because that could lead to some surprising destructive operations. This should be considered carefully.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/186/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
57217507 MDExOlB1bGxSZXF1ZXN0MjkwMzMyMTA= 318 Fix DataArray.loc indexing with Ellipsis: da.loc[...] shoyer 1217238 closed 0   0.4 799013 0 2015-02-10T18:46:37Z 2015-02-10T18:59:32Z 2015-02-10T18:59:31Z MEMBER   0 pydata/xarray/pulls/318
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/318/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
57198686 MDExOlB1bGxSZXF1ZXN0MjkwMjE2Mzg= 317 Fall back to netCDF4 if pandas can’t parse a date sjpfenninger 141709 closed 0   0.4 799013 1 2015-02-10T16:31:34Z 2015-02-10T18:37:35Z 2015-02-10T18:37:32Z CONTRIBUTOR   0 pydata/xarray/pulls/317

Addresses #316

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/317/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
56767996 MDExOlB1bGxSZXF1ZXN0Mjg3ODM5OTc= 315 Bug fix for multidimensional reindex edge case shoyer 1217238 closed 0   0.4 799013 0 2015-02-06T04:09:09Z 2015-02-06T04:10:23Z 2015-02-06T04:10:21Z MEMBER   0 pydata/xarray/pulls/315
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/315/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
56489618 MDExOlB1bGxSZXF1ZXN0Mjg2MTc5MTQ= 313 Fix decoding missing coordinates shoyer 1217238 closed 0   0.4 799013 0 2015-02-04T07:19:01Z 2015-02-04T07:21:03Z 2015-02-04T07:21:01Z MEMBER   0 pydata/xarray/pulls/313

Fixes #308

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/313/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53818267 MDU6SXNzdWU1MzgxODI2Nw== 308 BUG: xray fails to read netCDF files where coordinates do not refer to valid variables shoyer 1217238 closed 0   0.4 799013 0 2015-01-09T00:05:40Z 2015-02-04T07:21:01Z 2015-02-04T07:21:01Z MEMBER      

Instead, we should verify that coordinates refer to valid variables and fail gracefully.

As reported by @mgarvert.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/308/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
56479006 MDExOlB1bGxSZXF1ZXN0Mjg2MTI3NzM= 312 BUG: Fix slicing with negative step size shoyer 1217238 closed 0   0.4 799013 0 2015-02-04T04:32:07Z 2015-02-04T04:34:46Z 2015-02-04T04:34:39Z MEMBER   0 pydata/xarray/pulls/312

As identified here: https://github.com/perrette/dimarray/commit/ad4ab4d049f49881b28120d276337b2cab5e4061

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/312/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
54966501 MDExOlB1bGxSZXF1ZXN0Mjc3Mjg5MzI= 311 Bug fix for DataArray.to_dataframe with coords with different dimensions shoyer 1217238 closed 0   0.4 799013 0 2015-01-21T01:40:06Z 2015-01-21T01:44:29Z 2015-01-21T01:44:28Z MEMBER   0 pydata/xarray/pulls/311
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/311/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
54391570 MDExOlB1bGxSZXF1ZXN0MjczOTI5OTU= 310 More robust CF datetime unit parsing akleeman 514053 closed 0 shoyer 1217238 0.4 799013 1 2015-01-14T23:19:07Z 2015-01-14T23:36:34Z 2015-01-14T23:35:27Z CONTRIBUTOR   0 pydata/xarray/pulls/310

This makes it possible to read datasets that don't follow CF datetime conventions perfectly, such as the following example which (surprisingly) comes from NCEP/NCAR (you'd think they would follow CF!)

```
ds = xray.open_dataset('http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GEFS/Global_1p0deg_Ensemble/members/GEFS_Global_1p0deg_Ensemble_20150114_1200.grib2/GC')
print ds['time'].encoding['units']

u'Hour since 2015-01-14T12:00:00Z'
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/310/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
54012349 MDExOlB1bGxSZXF1ZXN0MjcxNzEyMDU= 309 Fix typos in docs eriknw 2058401 closed 0   0.4 799013 1 2015-01-12T01:00:09Z 2015-01-12T01:44:11Z 2015-01-12T01:43:43Z CONTRIBUTOR   0 pydata/xarray/pulls/309
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/309/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53719931 MDExOlB1bGxSZXF1ZXN0MjcwMTAyODg= 307 Skip NA in groupby groups shoyer 1217238 closed 0   0.4 799013 0 2015-01-08T06:40:17Z 2015-01-08T06:51:12Z 2015-01-08T06:51:10Z MEMBER   0 pydata/xarray/pulls/307

This makes xray consistent with the behavior of pandas.
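
For reference, the pandas behavior being matched (a minimal sketch):

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3])
print(s.groupby([np.nan, 'a', 'a']).sum())  # only the 'a' group appears; the NaN key is skipped
```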

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/307/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53314251 MDExOlB1bGxSZXF1ZXN0MjY3ODY1NzI= 304 Switch to use nan-skipping aggregation functions by default and add .median() method shoyer 1217238 closed 0   0.4 799013 0 2015-01-03T20:19:26Z 2015-01-04T16:05:30Z 2015-01-04T16:05:28Z MEMBER   0 pydata/xarray/pulls/304

TODO:

- ~~update documentation~~ (I'll do this later)
- ~~update minimum required numpy version to 1.9? (so we can use np.nanmedian)~~ (added an informative error message for median)

fixes #209 xref #130
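
A small sketch of the new default, which mirrors numpy's nan-aware functions (exact 0.4-era call details assumed):

```python
import numpy as np
import xray

da = xray.DataArray([1.0, np.nan, 3.0], dims='x')
print(float(da.mean()))                # 2.0 -- NaN is now skipped by default
print(np.nanmean([1.0, np.nan, 3.0]))  # 2.0 -- the numpy equivalent
```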

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/304/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
39876209 MDU6SXNzdWUzOTg3NjIwOQ== 209 Rethink silently passing TypeError when encountered during Dataset aggregations shoyer 1217238 closed 0   0.4 799013 0 2014-08-09T02:20:06Z 2015-01-04T16:05:28Z 2015-01-04T16:05:28Z MEMBER      

This is responsible for #205.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/209/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
53064234 MDExOlB1bGxSZXF1ZXN0MjY2NTczMzY= 302 Variables no longer conflict if they are broadcast equal and rather are promoted to use common dimensions shoyer 1217238 closed 0   0.4 799013 0 2014-12-29T19:19:42Z 2014-12-29T19:53:14Z 2014-12-29T19:52:57Z MEMBER   0 pydata/xarray/pulls/302

Fixes #243.

The idea here is that variables should not conflict if they are equal after being broadcast against each other; rather, they should be promoted to the common dimensions. This should resolve a number of annoyances caused by mixing scalar and non-scalar variables.

This PR includes fixes for concat, Dataset.merge (and thus Dataset.update and Dataset.__setitem__) and Dataset/DataArray arithmetic (via Coordinates.merge).
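
For intuition, "broadcast equal" in plain numpy terms (an illustration of the concept, not the merge code itself):

```python
import numpy as np

scalar = np.array(2.0)       # scalar variable
vector = np.full(3, 2.0)     # 1-d variable with the same value everywhere
a, b = np.broadcast_arrays(scalar, vector)
print(np.array_equal(a, b))  # True -> the two variables are broadcast equal
```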

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/302/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);