issues

32 rows where milestone = 799013, state = "closed" and type = "pull" sorted by updated_at descending
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
57577216 MDExOlB1bGxSZXF1ZXN0MjkyNTA3MjA= 321 Automatic label-based alignment for math and Dataset constructor shoyer 1217238 closed 0   0.4 799013 0 2015-02-13T09:31:43Z 2015-03-03T06:24:02Z 2015-02-13T22:19:29Z MEMBER   0 pydata/xarray/pulls/321

Fixes #186.

This will be a major breaking change for v0.4. For example, we can now do things like this:

```
In [5]: x = xray.DataArray(range(5), dims='x')

In [6]: x
Out[6]:
<xray.DataArray (x: 5)>
array([0, 1, 2, 3, 4])
Coordinates:
  * x        (x) int64 0 1 2 3 4

In [7]: x[:4] + x[1:]
Out[7]:
<xray.DataArray (x: 3)>
array([2, 4, 6])
Coordinates:
  * x        (x) int64 1 2 3
```

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/321/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59599124 MDExOlB1bGxSZXF1ZXN0MzAzNDg3NTA= 356 Documentation updates shoyer 1217238 closed 0   0.4 799013 0 2015-03-03T06:01:03Z 2015-03-03T06:02:57Z 2015-03-03T06:02:56Z MEMBER   0 pydata/xarray/pulls/356

Fixes #343

(among other small changes)

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/356/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59592828 MDExOlB1bGxSZXF1ZXN0MzAzNDUzODA= 355 Partial fix for netCDF4 datetime issues shoyer 1217238 closed 0   0.4 799013 0 2015-03-03T04:11:51Z 2015-03-03T05:02:54Z 2015-03-03T05:02:52Z MEMBER   0 pydata/xarray/pulls/355

xref #340

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/355/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59548752 MDExOlB1bGxSZXF1ZXN0MzAzMTk2Mzc= 351 Switch the name of datetime components from 'time.month' to 'month' shoyer 1217238 closed 0   0.4 799013 0 2015-03-02T20:55:24Z 2015-03-02T23:20:09Z 2015-03-02T23:20:07Z MEMBER   0 pydata/xarray/pulls/351

Fixes #345

This lets you write things like:

```
counts = time.groupby('time.month').count()
counts.sel(month=2)
```

instead of the previously valid

```
counts.sel(**{'time.month': 2})
```

which is much more awkward. Note that this breaks existing code which relied on the old usage.

CC @jhamman
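
A minimal sketch of the renamed group keys, using the modern xarray package name (the project was later renamed from xray to xarray); the data here is made up for illustration:

```
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2000-01-01", periods=365, freq="D")
da = xr.DataArray(np.arange(365), coords={"time": times}, dims="time")

# Group by the virtual 'time.month' coordinate; the resulting dimension is
# simply called 'month', so groups can be selected by plain keyword.
counts = da.groupby("time.month").count()
print(counts.sel(month=2))  # number of samples falling in February
```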

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/351/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59529630 MDExOlB1bGxSZXF1ZXN0MzAzMTA1Mjc= 350 Fix Dataset repr with netcdf4 datetime objects shoyer 1217238 closed 0   0.4 799013 0 2015-03-02T19:06:28Z 2015-03-02T19:25:01Z 2015-03-02T19:25:00Z MEMBER   0 pydata/xarray/pulls/350

Fixes #347

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/350/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59431613 MDExOlB1bGxSZXF1ZXN0MzAyNTQ5NjE= 348 Fix Dataset aggregate boolean shoyer 1217238 closed 0   0.4 799013 0 2015-03-02T02:26:27Z 2015-03-02T18:14:12Z 2015-03-02T18:14:11Z MEMBER   0 pydata/xarray/pulls/348

Fixes #342

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/348/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59420145 MDExOlB1bGxSZXF1ZXN0MzAyNDk4NzU= 346 Fix bug where Coordinates could turn Variable objects in Dataset constructor shoyer 1217238 closed 0   0.4 799013 0 2015-03-01T22:08:36Z 2015-03-01T23:57:58Z 2015-03-01T23:57:55Z MEMBER   0 pydata/xarray/pulls/346

This manifested itself in some variables not being written to netCDF files, because they were determined to be trivial indexes (hence that logic was also updated to be slightly less questionable).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/346/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59032389 MDExOlB1bGxSZXF1ZXN0MzAwNTY0MDk= 337 Cleanup (mostly documentation) shoyer 1217238 closed 0   0.4 799013 3 2015-02-26T07:40:01Z 2015-02-27T22:22:47Z 2015-02-26T07:43:37Z MEMBER   0 pydata/xarray/pulls/337
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/337/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
59033971 MDExOlB1bGxSZXF1ZXN0MzAwNTcxNTI= 338 Truncate long attributes when printing datasets shoyer 1217238 closed 0   0.4 799013 0 2015-02-26T07:57:40Z 2015-02-26T08:06:17Z 2015-02-26T08:06:05Z MEMBER   0 pydata/xarray/pulls/338

Only the first 500 characters are now shown, e.g.,

```
In [2]: xray.Dataset(attrs={'foo': 'bar' * 1000})
Out[2]:
<xray.Dataset>
Dimensions: ()
Coordinates:
    *empty*
Data variables:
    *empty*
Attributes:
    foo: barbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarb
         arbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarba
         rbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbar
         barbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarb
         arbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarba
         rbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbar
         barbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarb
         arbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarbarba...
```
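
An illustrative sketch of the truncation idea only (the helper name is made up; this is not the actual xray formatting code): cap the displayed attribute value at 500 characters and mark the cut with an ellipsis.

```
def summarize_attr(value, max_chars=500):
    # Show at most `max_chars` characters of an attribute value in the repr.
    text = str(value)
    if len(text) > max_chars:
        return text[:max_chars] + "..."
    return text

print(summarize_attr("bar" * 1000))  # first 500 characters followed by "..."
```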

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/338/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58682523 MDExOlB1bGxSZXF1ZXN0Mjk4NjQ5NzA= 334 Fix bug associated with reading / writing of mixed endian data. akleeman 514053 closed 0   0.4 799013 1 2015-02-24T01:57:43Z 2015-02-26T04:45:18Z 2015-02-26T04:45:18Z CONTRIBUTOR   0 pydata/xarray/pulls/334

The right solution to this is to figure out how to successfully round trip endian-ness, but that seems to be a deeper issue inside netCDF4 (https://github.com/Unidata/netcdf4-python/issues/346)

Instead, we force all data to little endian before writing with netCDF4.
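
A minimal NumPy sketch of the byte-order coercion described above (illustrative; not necessarily the exact code path in this PR):

```
import numpy as np

big = np.arange(5, dtype=">i4")                    # big-endian int32
little = big.astype(big.dtype.newbyteorder("<"))   # byte-swapped little-endian copy
assert little.dtype == np.dtype("<i4")
assert (big == little).all()                       # values unchanged, only the layout differs
```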

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/334/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58854176 MDExOlB1bGxSZXF1ZXN0Mjk5NjI4MzY= 335 Add broadcast_equals method to Dataset and DataArray shoyer 1217238 closed 0   0.4 799013 0 2015-02-25T05:51:46Z 2015-02-26T04:35:52Z 2015-02-26T04:35:49Z MEMBER   0 pydata/xarray/pulls/335
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/335/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58857388 MDExOlB1bGxSZXF1ZXN0Mjk5NjQyMTE= 336 Add Dataset.drop and DataArray.drop shoyer 1217238 closed 0   0.4 799013 0 2015-02-25T06:35:18Z 2015-02-25T22:01:49Z 2015-02-25T22:01:49Z MEMBER   0 pydata/xarray/pulls/336

These are convenient shortcuts for removing variables or index labels from an xray object.
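
A minimal usage sketch; modern xarray spells these shortcuts drop_vars and drop_sel, but the idea is the same:

```
import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"temperature": ("x", np.arange(4.0)), "pressure": ("x", np.ones(4))},
    coords={"x": [10, 20, 30, 40]},
)

ds.drop_vars("pressure")   # remove a data variable
ds.drop_sel(x=[10, 40])    # remove the rows with these index labels
```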

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/336/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58537752 MDExOlB1bGxSZXF1ZXN0Mjk3ODM4Njc= 330 Improved error handling for datetime decoding errors shoyer 1217238 closed 0   0.4 799013 0 2015-02-23T03:08:40Z 2015-02-25T22:00:36Z 2015-02-23T03:11:03Z MEMBER   0 pydata/xarray/pulls/330

Fixes #323

We now get an error message with a lovely traceback when opening a dataset with invalid time units. For example:

```
Traceback (most recent call last):
  File "/Users/shoyer/dev/xray/xray/test/test_conventions.py", line 340, in test_invalid_units_raises_eagerly
    decode_cf(ds)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 775, in decode_cf
    decode_coords)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 716, in decode_cf_variables
    decode_times=decode_times)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 676, in decode_cf_variable
    data = DecodedCFDatetimeArray(data, units, calendar)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 340, in __init__
    raise ValueError(msg)
ValueError: unable to decode time units 'foobar since 123' with the default calendar. Try opening your dataset with decode_times=False.
Full traceback:
Traceback (most recent call last):
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 331, in __init__
    decode_cf_datetime(example_value, units, calendar)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 130, in decode_cf_datetime
    delta = _netcdf_to_numpy_timeunit(delta)
  File "/Users/shoyer/dev/xray/xray/conventions.py", line 72, in _netcdf_to_numpy_timeunit
    return {'seconds': 's', 'minutes': 'm', 'hours': 'h', 'days': 'D'}[units]
KeyError: 'foobars'
```

Also includes a fix for a datetime decoding issue reported on the mailing list: https://groups.google.com/forum/#!topic/xray-dev/Sscsw5dQAqQ
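
A minimal sketch of the workaround that the error message suggests (the file name is hypothetical):

```
import xarray as xr

# With decode_times=False the raw numeric time values and their 'units'
# attribute are kept as-is instead of being converted to datetime64.
ds = xr.open_dataset("example_with_bad_time_units.nc", decode_times=False)
print(ds["time"].attrs.get("units"))
```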

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/330/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58679397 MDExOlB1bGxSZXF1ZXN0Mjk4NjMyMjU= 333 Unify netCDF4 and scipy backends in the public API shoyer 1217238 closed 0   0.4 799013 0 2015-02-24T01:20:01Z 2015-02-25T06:21:03Z 2015-02-25T06:21:01Z MEMBER   0 pydata/xarray/pulls/333

Fixes #273 and half of #272

To serialize a dataset to a string/bytes, simply use ds.to_netcdf(). This behavior copies DataFrame.to_csv() from pandas. The legacy dump and dumps methods are deprecated.

My main concern is that the default "format" option depends on which dependencies the user has installed and on whether they are saving to a file. That seems non-ideal, but it may be the most pragmatic choice given the limitations of the netCDF4 format.

This change also adds:

- Support for writing datasets to a particular NETCDF4 group
- Support for opening netCDF3 files from disk even without netCDF4-python, if scipy is installed.

CC @akleeman
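
A minimal sketch of the unified API using the modern xarray package name (the file name is hypothetical):

```
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(3.0))})

raw = ds.to_netcdf()         # no path given: returns the serialized file as bytes
ds.to_netcdf("example.nc")   # path given: writes to disk instead
print(type(raw), len(raw))
```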

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/333/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58635715 MDExOlB1bGxSZXF1ZXN0Mjk4Mzc2ODU= 332 Update time.season to use text labels like 'DJF' shoyer 1217238 closed 0   0.4 799013 0 2015-02-23T19:31:58Z 2015-02-23T19:43:41Z 2015-02-23T19:43:39Z MEMBER   0 pydata/xarray/pulls/332

Previously, I used numbers 1 through 4 for the sake of consistency with pandas, but such labels really were impossible to keep track of.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/332/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58545451 MDExOlB1bGxSZXF1ZXN0Mjk3ODc1NzM= 331 Documentation updates anticipating v0.4 shoyer 1217238 closed 0   0.4 799013 0 2015-02-23T05:51:42Z 2015-02-23T06:18:39Z 2015-02-23T06:18:35Z MEMBER   0 pydata/xarray/pulls/331
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/331/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53659919 MDExOlB1bGxSZXF1ZXN0MjY5NzM5NTc= 306 Fix coercion of numeric strings to objects shoyer 1217238 closed 0   0.4 799013 0 2015-01-07T17:45:23Z 2015-02-23T06:09:10Z 2015-01-07T18:14:31Z MEMBER   0 pydata/xarray/pulls/306

Fixes #305

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/306/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58474512 MDExOlB1bGxSZXF1ZXN0Mjk3NTk4MTk= 329 Dataset.apply works if func returns like-shaped ndarrays shoyer 1217238 closed 0   0.4 799013 0 2015-02-21T20:54:00Z 2015-02-23T00:35:25Z 2015-02-23T00:35:23Z MEMBER   0 pydata/xarray/pulls/329

This extends the recent change by @IamJeffG (#327).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/329/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58307190 MDExOlB1bGxSZXF1ZXN0Mjk2NzEzMjg= 327 Cleanly apply generic ndarrays to DataArray.groupby IamJeffG 2002703 closed 0   0.4 799013 1 2015-02-20T03:47:15Z 2015-02-20T04:41:10Z 2015-02-20T04:41:08Z CONTRIBUTOR   0 pydata/xarray/pulls/327

This is special cased for np.ndarrays: applying to DataArrays is not only inefficient but would also be wrong if the applied function wanted to change metadata.

Fixes #326
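
A minimal sketch of applying a plain-ndarray-returning function over groups, using modern xarray names (where apply is now spelled map); the data is made up:

```
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6.0), dims="t", coords={"label": ("t", [0, 0, 0, 1, 1, 1])}
)

# The function works on raw values and returns an ndarray with the same shape
# as each group; xarray wraps the result back into a DataArray.
demeaned = da.groupby("label").map(lambda g: g.values - g.values.mean())
print(demeaned)
```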

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/327/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58182276 MDExOlB1bGxSZXF1ZXN0Mjk1OTUyMjU= 325 Rename Dataset.vars -> data_vars and remove deprecated aliases shoyer 1217238 closed 0   0.4 799013 0 2015-02-19T09:01:45Z 2015-02-19T19:31:17Z 2015-02-19T19:31:11Z MEMBER   0 pydata/xarray/pulls/325

Fixes #293

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/325/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
58024857 MDExOlB1bGxSZXF1ZXN0Mjk1MDA2ODc= 322 Support reindexing with an optional fill method shoyer 1217238 closed 0   0.4 799013 0 2015-02-18T04:32:47Z 2015-02-18T04:42:00Z 2015-02-18T04:41:59Z MEMBER   0 pydata/xarray/pulls/322

e.g., pad, backfill or nearest
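
A minimal sketch of reindexing with a fill method (made-up data, modern xarray spelling):

```
import xarray as xr

da = xr.DataArray([10.0, 20.0, 30.0], coords={"x": [0, 1, 2]}, dims="x")

# 'pad' forward-fills from the nearest existing label below; 'backfill' and
# 'nearest' work analogously, instead of inserting NaN for unmatched labels.
print(da.reindex(x=[0, 1, 2, 3, 4], method="pad"))
```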

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/322/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
57217507 MDExOlB1bGxSZXF1ZXN0MjkwMzMyMTA= 318 Fix DataArray.loc indexing with Ellipsis: da.loc[...] shoyer 1217238 closed 0   0.4 799013 0 2015-02-10T18:46:37Z 2015-02-10T18:59:32Z 2015-02-10T18:59:31Z MEMBER   0 pydata/xarray/pulls/318
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/318/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
57198686 MDExOlB1bGxSZXF1ZXN0MjkwMjE2Mzg= 317 Fall back to netCDF4 if pandas can’t parse a date sjpfenninger 141709 closed 0   0.4 799013 1 2015-02-10T16:31:34Z 2015-02-10T18:37:35Z 2015-02-10T18:37:32Z CONTRIBUTOR   0 pydata/xarray/pulls/317

Addresses #316

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/317/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
56767996 MDExOlB1bGxSZXF1ZXN0Mjg3ODM5OTc= 315 Bug fix for multidimensional reindex edge case shoyer 1217238 closed 0   0.4 799013 0 2015-02-06T04:09:09Z 2015-02-06T04:10:23Z 2015-02-06T04:10:21Z MEMBER   0 pydata/xarray/pulls/315
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/315/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
56489618 MDExOlB1bGxSZXF1ZXN0Mjg2MTc5MTQ= 313 Fix decoding missing coordinates shoyer 1217238 closed 0   0.4 799013 0 2015-02-04T07:19:01Z 2015-02-04T07:21:03Z 2015-02-04T07:21:01Z MEMBER   0 pydata/xarray/pulls/313

Fixes #308

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/313/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
56479006 MDExOlB1bGxSZXF1ZXN0Mjg2MTI3NzM= 312 BUG: Fix slicing with negative step size shoyer 1217238 closed 0   0.4 799013 0 2015-02-04T04:32:07Z 2015-02-04T04:34:46Z 2015-02-04T04:34:39Z MEMBER   0 pydata/xarray/pulls/312

As identified here: https://github.com/perrette/dimarray/commit/ad4ab4d049f49881b28120d276337b2cab5e4061

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/312/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
54966501 MDExOlB1bGxSZXF1ZXN0Mjc3Mjg5MzI= 311 Bug fix for DataArray.to_dataframe with coords with different dimensions shoyer 1217238 closed 0   0.4 799013 0 2015-01-21T01:40:06Z 2015-01-21T01:44:29Z 2015-01-21T01:44:28Z MEMBER   0 pydata/xarray/pulls/311
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/311/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
54391570 MDExOlB1bGxSZXF1ZXN0MjczOTI5OTU= 310 More robust CF datetime unit parsing akleeman 514053 closed 0 shoyer 1217238 0.4 799013 1 2015-01-14T23:19:07Z 2015-01-14T23:36:34Z 2015-01-14T23:35:27Z CONTRIBUTOR   0 pydata/xarray/pulls/310

This makes it possible to read datasets that don't follow CF datetime conventions perfectly, such as the following example which (surprisingly) comes from NCEP/NCAR (you'd think they would follow CF!)

```
ds = xray.open_dataset('http://thredds.ucar.edu/thredds/dodsC/grib/NCEP/GEFS/Global_1p0deg_Ensemble/members/GEFS_Global_1p0deg_Ensemble_20150114_1200.grib2/GC')
print ds['time'].encoding['units']

u'Hour since 2015-01-14T12:00:00Z'
```
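
An illustrative sketch only (the helper is made up and is not this PR's code): one way to tolerate an almost-CF unit string like the one above is to normalize the delta word before decoding.

```
def normalize_time_units(units):
    # "Hour since 2015-01-14T12:00:00Z" -> "hours since 2015-01-14T12:00:00Z"
    delta, _, reference = units.partition(" since ")
    delta = delta.strip().lower()
    if not delta.endswith("s"):
        delta += "s"
    return delta + " since " + reference

print(normalize_time_units("Hour since 2015-01-14T12:00:00Z"))
```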

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/310/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
54012349 MDExOlB1bGxSZXF1ZXN0MjcxNzEyMDU= 309 Fix typos in docs eriknw 2058401 closed 0   0.4 799013 1 2015-01-12T01:00:09Z 2015-01-12T01:44:11Z 2015-01-12T01:43:43Z CONTRIBUTOR   0 pydata/xarray/pulls/309
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/309/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53719931 MDExOlB1bGxSZXF1ZXN0MjcwMTAyODg= 307 Skip NA in groupby groups shoyer 1217238 closed 0   0.4 799013 0 2015-01-08T06:40:17Z 2015-01-08T06:51:12Z 2015-01-08T06:51:10Z MEMBER   0 pydata/xarray/pulls/307

This makes xray consistent with the behavior of pandas.
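
A minimal sketch of the pandas behaviour being matched here (made-up data): rows whose group key is NaN simply form no group.

```
import numpy as np
import pandas as pd

values = pd.Series([1.0, 2.0, 3.0, 4.0])
keys = pd.Series(["a", np.nan, "a", "b"])

print(values.groupby(keys).count())   # only groups 'a' and 'b'; the NaN key is skipped
```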

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/307/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53314251 MDExOlB1bGxSZXF1ZXN0MjY3ODY1NzI= 304 Switch to use nan-skipping aggregation functions by default and add .median() method shoyer 1217238 closed 0   0.4 799013 0 2015-01-03T20:19:26Z 2015-01-04T16:05:30Z 2015-01-04T16:05:28Z MEMBER   0 pydata/xarray/pulls/304

TODO:

- ~~update documentation~~ (I'll do this later)
- ~~update minimum required numpy version to 1.9? (so we can use np.nanmedian)~~ (added an informative error message for median)

fixes #209 xref #130
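
A minimal sketch of the new defaults (modern xarray spelling, made-up data):

```
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, np.nan, 3.0], dims="x")

print(da.mean())              # 2.0 -- NaN values are skipped by default
print(da.mean(skipna=False))  # nan -- the previous behaviour, now opt-in
print(da.median())            # 2.0 -- the newly added aggregation
```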

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/304/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull
53064234 MDExOlB1bGxSZXF1ZXN0MjY2NTczMzY= 302 Variables no longer conflict if they are broadcast equal and rather are promoted to use common dimensions shoyer 1217238 closed 0   0.4 799013 0 2014-12-29T19:19:42Z 2014-12-29T19:53:14Z 2014-12-29T19:52:57Z MEMBER   0 pydata/xarray/pulls/302

Fixes #243.

The idea here is that variables should not conflict if they are equal after being broadcast against each other; rather, they should be promoted to the common dimensions. This should resolve a number of annoyances caused by mixing scalar and non-scalar variables.

This PR includes fixes for concat, Dataset.merge (and thus Dataset.update and Dataset.__setitem__) and Dataset/DataArray arithmetic (via Coordinates.merge).
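
A minimal sketch of the broadcast-equal idea using modern xarray names (xr.merge with compat="broadcast_equals"); illustrative, not the PR's own test:

```
import numpy as np
import xarray as xr

scalar = xr.DataArray(1.0)                     # 0-d variable
expanded = xr.DataArray(np.ones(3), dims="x")  # equal to `scalar` after broadcasting

print(scalar.broadcast_equals(expanded))       # True
print(scalar.equals(expanded))                 # False -- shapes differ

# Under broadcast_equals compatibility the two no longer conflict; the merged
# variable is promoted to the common dimensions.
merged = xr.merge(
    [xr.Dataset({"c": scalar}), xr.Dataset({"c": expanded})],
    compat="broadcast_equals",
)
print(merged["c"].dims)                        # ('x',)
```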

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/302/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 pull

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);