issues

10 rows where state = "closed" and user = 12307589 sorted by updated_at descending


#828: Attributes are currently kept when arrays are resampled, and not when datasets are resampled
mcgibbon · issue · closed as completed · 4 comments · opened 2016-04-16 · closed 2016-04-20 · last updated 2020-04-05

Because line 323 of groupby.py copies attributes from a DataArray to its resampling output (it shouldn't), attributes are kept in many cases when DataArrays are resampled (and not kept for similar cases when Datasets are resampled).

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/828/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#929: Dataset creation requires tuple, list treated differently
mcgibbon · issue · closed as completed · 4 comments · opened 2016-08-01 · closed 2019-02-26 · last updated 2019-02-26

Take the Dataset creation example:

```python
In [35]: ds = xr.Dataset({'temperature': (['x', 'y', 'time'], temp),
   ....:                  'precipitation': (['x', 'y', 'time'], precip)},
   ....:                 coords={'lon': (['x', 'y'], lon),
   ....:                         'lat': (['x', 'y'], lat),
   ....:                         'time': pd.date_range('2014-09-06', periods=3),
   ....:                         'reference_time': pd.Timestamp('2014-09-05')})
```

if the tuple (['x', 'y', 'time'], temp) is replaced with a list [['x', 'y', 'time'], temp], the behavior changes in very strange ways. The resulting Dataset will then have a coordinate variable temperature whose dimensions are ('temperature', 'x', 'y', 'time'). Printing temperature shows that the ['x', 'y', 'time'] part has been interpreted as data rather than metadata. It seems to be impossible to access the data in the resulting temperature coordinate by indexing.

This might be intentional (since one could actually want to pass in data that is stored as a list), but it may be better to do some sanity checking when a list is passed, to figure out whether the list is data or a dims/data pair as above. If no change is made, then this behavior should probably be pointed out in the documentation.
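The sanity check suggested above could be sketched in plain Python. `looks_like_dims` and `normalize_variable` are hypothetical helpers for illustration only, not xarray API, and the real constructor logic would need to handle more cases:

```python
def looks_like_dims(obj):
    """Heuristic: a dims-then-data pair starts with a sequence of strings.

    Hypothetical helper sketching the sanity check suggested above.
    """
    return (
        isinstance(obj, (list, tuple))
        and len(obj) > 0
        and all(isinstance(name, str) for name in obj)
    )


def normalize_variable(value):
    """Treat a [dims, data] list like a (dims, data) tuple when unambiguous."""
    if isinstance(value, list) and len(value) == 2 and looks_like_dims(value[0]):
        return tuple(value)  # reinterpret as a (dims, data) pair
    return value             # anything else is plain data
```

With this, `normalize_variable([['x', 'y', 'time'], temp])` would behave like the tuple form, while a genuine data list such as `[1, 2, 3]` passes through untouched.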

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/929/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#1062: Add remaining date units to conventions.py
mcgibbon · issue · closed as completed · 6 comments · opened 2016-10-26 · closed 2019-02-24 · last updated 2019-02-24

Currently _netcdf_to_numpy_timeunit in conventions.py (seemingly) artificially imposes that weeks, months, and years can't be used as time units, despite months and years being CF-compliant and datetime64 supporting all three.

Are these possibly disabled because of the way Udunits defines these units?
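For concreteness, here is a hedged sketch of the kind of name-to-code table such a function maintains, extended with the units the issue asks about. The codes are numpy datetime64 unit codes; the actual table in conventions.py may differ, and months/years are calendar-dependent (see the CF caveats quoted below):

```python
# Sketch only: the real _netcdf_to_numpy_timeunit table may differ.
NETCDF_TO_NUMPY_TIMEUNIT = {
    "microseconds": "us",
    "milliseconds": "ms",
    "seconds": "s",
    "minutes": "m",
    "hours": "h",
    "days": "D",
    # Proposed additions; datetime64 defines these codes, but their
    # length depends on the calendar in use.
    "weeks": "W",
    "months": "M",
    "years": "Y",
}


def netcdf_to_numpy_timeunit(units):
    """Map a CF/netCDF time-unit name to its numpy datetime64 code."""
    return NETCDF_TO_NUMPY_TIMEUNIT[units]
```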

From CF conventions:

We recommend that the unit year be used with caution. The Udunits package defines a year to be exactly 365.242198781 days (the interval between 2 successive passages of the sun through vernal equinox). It is not a calendar year. Udunits includes the following definitions for years: a common_year is 365 days, a leap_year is 366 days, a Julian_year is 365.25 days, and a Gregorian_year is 365.2425 days.

For similar reasons the unit month, which is defined in udunits.dat to be exactly year/12, should also be used with caution.
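The Udunits values quoted above can be checked with a line of arithmetic: the Gregorian mean year is the average over the 400-year leap cycle (97 leap years per 400), and the Julian year averages one leap year per four:

```python
# Arithmetic check of the Udunits year lengths quoted from the CF conventions.
gregorian_year = (400 * 365 + 97) / 400   # 97 extra leap days per 400-year cycle
julian_year = (3 * 365 + 366) / 4          # one leap year in four

print(gregorian_year)   # 365.2425
print(julian_year)      # 365.25
```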

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1062/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#833: Coveralls is missing line-by-line report
mcgibbon · issue · closed as completed · 2 comments · opened 2016-04-19 · closed 2019-01-27 · last updated 2019-01-27

The report showing which lines are and aren't covered by tests is missing from coveralls (example). Others have also had this problem (example 1, 2).

Let me know if it's there and I'm just not finding it.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/833/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#2176: Advice on unit-aware arithmetic
mcgibbon · issue · closed as completed · 9 comments · opened 2018-05-23 · closed 2018-05-25 · last updated 2018-05-25

This isn't really a bug report. In sympl we're using DataArrays that allow unit-aware operations using the 'units' attribute as the only persistent unit storage. We use pint as a backend to operate on unit strings, but this is never exposed to the user and could be swapped for another backend without much consequence.

Basically, we currently have this implemented as a subclass sympl.DataArray. @dopplershift recently introduced me to the accessor interface, and I've been thinking about whether to switch over to that way of extending DataArray.

The problem I have is that the new code that results from using an accessor is quite cumbersome. The issue lies in that we mainly use new implementations for arithmetic operations. So, for example, the following code:

```python
dt = DataArray(timestep.total_seconds(), attrs={'units': 's'})
for key in tendencies_list[0].keys():
    return_state[key] = state[key] + dt * (
        1.5 * tendencies_list[-1][key] - 0.5 * tendencies_list[-2][key]
    )
```

instead becomes

```python
dt = DataArray(timestep.total_seconds(), attrs={'units': 's'})
for key in tendencies_list[0].keys():
    return_state[key] = state[key].sympl.add(
        dt.sympl.multiply(
            tendencies_list[-1][key].sympl.multiply(1.5).sympl.subtract(
                tendencies_list[-2][key].sympl.multiply(0.5)
            )
        )
    )
```

This could be a little less cumbersome if we avoided a sympl namespace and instead added a separate accessor for each method; at least then it reads naturally. However, there's a reason you don't generally recommend doing this.

```python
dt = DataArray(timestep.total_seconds(), attrs={'units': 's'})
for key in tendencies_list[0].keys():
    return_state[key] = state[key].add(
        dt.multiply(
            tendencies_list[-1][key].multiply(1.5).subtract(
                tendencies_list[-2][key].multiply(0.5)
            )
        )
    )
```

I'm looking for advice on what is best for sympl to do here. Right now I'm leaning towards that we should use a subclass rather than an accessor - does this seem like an appropriate case to do so?
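The readability argument can be made concrete without xarray or pint. `Quantity` below is a hypothetical plain-Python stand-in for a DataArray subclass whose only persistent unit storage is a units string; overloaded operators keep expressions in their natural infix shape, which accessor methods cannot:

```python
class Quantity:
    """Toy unit-carrying value; a sketch, not sympl's or xarray's API."""

    def __init__(self, value, units):
        self.value = value
        self.units = units

    def __add__(self, other):
        # Unit-checked addition: adding mismatched units is an error.
        if self.units != other.units:
            raise ValueError(f"cannot add {other.units} to {self.units}")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        # Toy unit algebra: concatenate unit strings.
        units = " ".join(u for u in (self.units, other.units) if u)
        return Quantity(self.value * other.value, units)


# Infix operators read naturally, as in the first code block above:
total = Quantity(1.5, "m") + Quantity(0.5, "m")
speed = Quantity(3.0, "m") * Quantity(2.0, "s-1")
print(total.value, total.units)   # 2.0 m
print(speed.value, speed.units)   # 6.0 m s-1
```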

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2176/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#2008: Cannot save netcdf files with non-standard calendars
mcgibbon · issue · closed as completed · 6 comments · opened 2018-03-23 · closed 2018-05-16 · last updated 2018-05-16

Code Sample, a copy-pastable example if possible

Using noleap.nc from the following zip: noleap.zip

```python
import xarray as xr
ds = xr.open_dataset('noleap.nc')
ds.to_netcdf('noleap_new.nc')
```

Problem description

A long traceback gets printed out (sorry, I can't copy it properly from my current machine) that ends in TypeError: float() argument must be a string or a number, not 'netcdftime._netcdftime.DatetimeNoLeap'

Expected Output

Obviously, we expect the file to save. If xarray can decode the times, it should be able to encode them.

Output of xr.show_versions()

AttributeError: module 'xarray' has no attribute 'show_versions'

I'm running xarray 0.9.1, numpy 1.12.1, pandas 0.21.0, and netCDF4 1.2.8.

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2008/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#898: Indexing by multiple arrays inconsistent with numpy
mcgibbon · issue · closed as completed · 1 comment · opened 2016-07-12 · closed 2016-07-31 · last updated 2016-07-31

When indexing an array with multiple 1d arrays of the same length, the behavior of DataArray is different from the behavior of numpy arrays. Particularly, a 2d array is returned instead of a 1d array.

```python
In [1]: import xarray as xr

In [2]: import numpy as np

In [3]: a = np.random.randn(5, 5)

In [4]: print(a[range(5), range(5)])
[ 0.92539795  0.06337135 -0.02374713 -0.6795863  -1.98749572]

In [5]: a = xr.DataArray(a)

In [6]: print(a[range(5), range(5)])
<xarray.DataArray (dim_0: 5, dim_1: 5)>
array([[ 0.92539795,  0.34007445,  0.44199176,  1.29499782, -0.92076652],
       [ 0.23939236,  0.06337135,  0.83446803,  0.58847174, -1.08886251],
       [ 1.35784349, -0.51613834, -0.02374713,  1.6610402 ,  0.80005739],
       [-0.75571607, -1.67907855,  1.29851435, -0.6795863 , -2.47751013],
       [-0.05817197, -1.195133  ,  0.43844213,  0.29625676, -1.98749572]])
Coordinates:
  * dim_0    (dim_0) int64 0 1 2 3 4
  * dim_1    (dim_1) int64 0 1 2 3 4

In [7]: xr.__version__
Out[7]: '0.7.2-6-g859ddc2'
```
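The numpy side of the comparison can be shown deterministically with a plain integer array: indexing with two same-length integer sequences selects elementwise pairs (here the diagonal) and returns a 1-d result, not a 2-d outer product:

```python
import numpy as np

# Indexing with two same-length integer sequences pairs them elementwise.
a = np.arange(25).reshape(5, 5)
diag = a[range(5), range(5)]     # pairs (0, 0), (1, 1), ..., (4, 4)
print(diag)          # [ 0  6 12 18 24]
print(diag.shape)    # (5,)
```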

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/898/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#885: Docstring for Dataset.drop needs revision
mcgibbon · issue · closed as completed · 1 comment · opened 2016-06-15 · closed 2016-06-16 · last updated 2016-06-16

The docstring for Dataset.drop indicates that the first argument, labels, should be a string indicating names of variables or index labels to drop. I'm not sure, but I'm guessing it either takes in several strings (in which case the docstring should say *labels), or it takes in either a string or an iterable (in which case the argument type should reflect this).
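The second possibility (a string or an iterable of strings) is the usual pattern; a hypothetical sketch of the normalization such a signature implies, not xarray's actual implementation:

```python
def normalize_labels(labels):
    """Return a list of names from either a single string or an iterable.

    Illustrative helper only; Dataset.drop's real logic may differ.
    """
    if isinstance(labels, str):
        return [labels]          # a lone name
    return list(labels)          # any iterable of names

print(normalize_labels("temperature"))           # ['temperature']
print(normalize_labels(["temperature", "lat"]))  # ['temperature', 'lat']
```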

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/885/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#829: keep_attrs for Dataset.resample and DataArray.resample
mcgibbon · pull request (pydata/xarray/pulls/829) · closed · 7 comments · opened 2016-04-16 · closed 2016-04-19 · last updated 2016-04-19

Closes #825 and #828

This update might break some scripts, because of bugs that existed in the code before. resample was inconsistent as to when it would and would not keep attributes. In particular, Dataset resampling usually threw out attributes, but not when how='first' or how='last', and DataArray preserved attributes for everything I tested ('first', 'last', and 'mean').
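The intended keep_attrs semantics can be sketched with plain dicts: attributes propagate to the resampled result only when explicitly requested. `resample_mean` is a toy stand-in for illustration, not xarray's resample machinery:

```python
def resample_mean(values, attrs, keep_attrs=False):
    """Toy stand-in for resample(...) with a mean reduction.

    Copies the input attributes onto the result only if keep_attrs=True.
    """
    result_attrs = dict(attrs) if keep_attrs else {}
    return sum(values) / len(values), result_attrs

print(resample_mean([280.0, 290.0], {"units": "K"}, keep_attrs=True))
print(resample_mean([280.0, 290.0], {"units": "K"}))
```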

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/829/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
#825: keep_attrs for Dataset.resample and DataArray.resample
mcgibbon · issue · closed as completed · 10 comments · opened 2016-04-15 · closed 2016-04-19 · last updated 2016-04-19

Currently there is no option for preserving attributes when resampling a Dataset or DataArray. Could there be a keep_attrs keyword argument for these methods?

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/825/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}

```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
```
Powered by Datasette · Queries took 1600.655ms · About: xarray-datasette