issues


4 rows where "closed_at" is on date 2019-02-27 and type = "issue" sorted by updated_at descending




#827 · Issue with GFS time reference
id: 148876551 · node_id: MDU6SXNzdWUxNDg4NzY1NTE= · user: caiostringari (8363752) · state: closed · locked: 0 · comments: 7
created_at: 2016-04-16T18:14:33Z · updated_at: 2022-01-12T14:48:24Z · closed_at: 2019-02-27T01:48:20Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

I am currently translating some old ferret code into Python. However, when downloading GFS operational data, I ran into an issue.

When downloaded from ferret, the GFS file has the following time reference (using ncdump -h):

```
double TIME(TIME) ;
    TIME:units = "days since 0001-01-01 00:00:00" ;
    TIME:long_name = "time" ;
    TIME:time_origin = "01-JAN-0001 00:00:00" ;
    TIME:axis = "T" ;
    TIME:standard_name = "time" ;
```

When using xarray to access the OPeNDAP server and writing to disk using ds.to_netcdf(), the file has this time reference:

```
double time(time) ;
    string time:grads_dim = "t" ;
    string time:grads_mapping = "linear" ;
    string time:grads_size = "81" ;
    string time:grads_min = "00z15apr2016" ;
    string time:grads_step = "3hr" ;
    string time:long_name = "time" ;
    string time:minimum = "00z15apr2016" ;
    string time:maximum = "00z25apr2016" ;
    time:resolution = 0.125f ;
    string time:units = "days since 2001-01-01" ;
    time:calendar = "proleptic_gregorian" ;
```

This is not really an issue when using the data inside Python, because the dates are translated correctly. However, in my workflow I need this file to be read by other models such as WW3. For instance, trying to read it from WW3 results in:

```
 Processing data

        Time : 0015/03/15 00:00:00 UTC
               reading ....
               interpolating ....
               writing ....
        Time : 0015/03/15 03:00:00 UTC
```

Looking at the reference time, ferret gives TIME:time_origin = "01-JAN-0001 00:00:00" while xarray gives time:units = "days since 2001-01-01". Well, there are 2000 years missing...

I tried to fix it using something like:

```
ds.coords['reference_time'] = pd.Timestamp('1-1-1')
```

But the reference time didn't actually update. Is there an easy way to fix the reference time so it matches what is on NOAA's OPeNDAP server?
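A possible fix, sketched under the assumption that the goal is to control the units that to_netcdf writes: xarray honours a variable's encoding dictionary when saving, so the time epoch can be pinned to the ferret-style origin before writing (opendap_url below is a placeholder for the GFS server URL).

```
import xarray as xr

ds = xr.open_dataset(opendap_url)  # opendap_url: placeholder for the GFS OPeNDAP URL

# Ask the netCDF writer to encode time against the ferret-style epoch
# instead of the GrADS-supplied "days since 2001-01-01".
ds.time.encoding['units'] = 'days since 0001-01-01 00:00:00'
ds.time.encoding['calendar'] = 'proleptic_gregorian'

ds.to_netcdf('gfs_fixed.nc')
```

Assigning ds.coords['reference_time'] only adds a new coordinate; it does not change how the existing time variable is encoded on disk, which is why the attempt above had no effect.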

Reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/827/reactions
#780 · Index/dimension order not preserved when going from and to DataFrame
id: 137920337 · node_id: MDU6SXNzdWUxMzc5MjAzMzc= · user: samwisehawkins (4641789) · state: closed · locked: 0 · comments: 5
created_at: 2016-03-02T16:17:55Z · updated_at: 2019-02-27T22:48:20Z · closed_at: 2019-02-27T22:48:20Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

If I convert DataFrame --> Dataset --> DataFrame, then the index ordering gets switched. Is this a bug or a feature?

```
import numpy as np
import pandas as pd
import xarray as xr

ids = ['A', 'B']
heights = [1, 2]
times = pd.date_range('2010-01-01', '2010-01-02', freq='1D')
names = ['id', 'height', 'time']
index = pd.MultiIndex.from_product([ids, heights, times], names=names)
f1 = pd.DataFrame(index=index)  # has ordering id, height, time
f1['V1'] = np.random.ranf(size=len(f1))
f1['V2'] = np.random.ranf(size=len(f1))

ds = xr.Dataset.from_dataframe(f1)  # has ordering id, height, time
f2 = ds.to_dataframe()              # has ordering height, id, time
```
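One possible workaround, assuming the original level order is still at hand: pandas can restore the MultiIndex ordering after the round trip (reorder_levels is standard pandas; using it here is just an illustration, not necessarily the fix the xarray developers adopted).

```
# Restore the original index level order after the round trip.
f2 = ds.to_dataframe().reorder_levels(f1.index.names)
```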

Reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/780/reactions
#816 · opendap and gzipped files
id: 145807529 · node_id: MDU6SXNzdWUxNDU4MDc1Mjk= · user: swnesbitt (3288592) · state: closed · locked: 0 · comments: 4
created_at: 2016-04-04T21:17:24Z · updated_at: 2019-02-27T19:48:20Z · closed_at: 2019-02-27T19:48:20Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

I found an issue when opening OPeNDAP files that are remotely gzipped: it appears that the scipy netCDF backend is used to read gzipped netCDF, but that backend apparently can't do OPeNDAP. It works with local files.

```
print(ncfiles[0])
ascat = xr.open_dataset(ncfiles[0])

http://opendap.jpl.nasa.gov/opendap/OceanWinds/ascat/preview/L2/metop_a/12km/2013/027/ascat_20130127_004801_metopa_32553_eps_o_125_2101_ovw.l2.nc.gz

IOError                                   Traceback (most recent call last)
<ipython-input-35-c5425b003df3> in <module>()
      1 print(ncfiles[0])
----> 2 ascat=xr.open_dataset(ncfiles[0])

/data/keeling/a/snesbitt/anaconda2/lib/python2.7/site-packages/xarray/backends/api.pyc in open_dataset(filename_or_obj, group, decode_cf, mask_and_scale, decode_times, concat_characters, decode_coords, engine, chunks, lock, drop_variables)
    197                                  'supported on Python 2.6')
    198         try:
--> 199             store = backends.ScipyDataStore(gzip.open(filename_or_obj))
    200         except TypeError as e:
    201             # TODO: gzipped loading only works with NetCDF3 files.

/data/keeling/a/snesbitt/anaconda2/lib/python2.7/gzip.pyc in open(filename, mode, compresslevel)
     32
     33     """
---> 34     return GzipFile(filename, mode, compresslevel)
     35
     36 class GzipFile(io.BufferedIOBase):

/data/keeling/a/snesbitt/anaconda2/lib/python2.7/gzip.pyc in __init__(self, filename, mode, compresslevel, fileobj, mtime)
     92             mode += 'b'
     93         if fileobj is None:
---> 94             fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb')
     95         if filename is None:
     96             # Issue #13781: os.fdopen() creates a fileobj with a bogus name

IOError: [Errno 2] No such file or directory: 'http://opendap.jpl.nasa.gov/opendap/OceanWinds/ascat/preview/L2/metop_a/12km/2013/027/ascat_20130127_004801_metopa_32553_eps_o_125_2101_ovw.l2.nc.gz'
```
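One possible workaround, given the traceback above: gzip.open() only understands local paths, so fetch and decompress the remote .nc.gz to a local file first, then open it normally. This sketch assumes Python 3 (the report used Python 2) and that the file is also reachable by plain HTTP download; urllib and the temporary file are illustrative choices.

```
import gzip
import tempfile
import urllib.request

import xarray as xr

url = ('http://opendap.jpl.nasa.gov/opendap/OceanWinds/ascat/preview/L2/'
       'metop_a/12km/2013/027/'
       'ascat_20130127_004801_metopa_32553_eps_o_125_2101_ovw.l2.nc.gz')

# Download and decompress the remote .nc.gz into a local NetCDF file.
with urllib.request.urlopen(url) as resp:
    data = gzip.decompress(resp.read())

with tempfile.NamedTemporaryFile(suffix='.nc', delete=False) as tmp:
    tmp.write(data)
    local_path = tmp.name

ascat = xr.open_dataset(local_path)  # plain local file: any backend can read it
```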

Reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/816/reactions
#819 · N-D rolling
id: 146287030 · node_id: MDU6SXNzdWUxNDYyODcwMzA= · user: forman (206773) · state: closed · locked: 0 · comments: 5
created_at: 2016-04-06T11:42:42Z · updated_at: 2019-02-27T17:48:20Z · closed_at: 2019-02-27T17:48:20Z
author_association: NONE · state_reason: completed · repo: xarray (13221727) · type: issue

Dear xarray Team,

We just discovered xarray and it seems to be a fantastic candidate to serve as the core library for the climate data toolbox we are about to implement. While investigating the API, we noticed that the windows kwargs in

DataArray.rolling(min_periods=None, center=False, **windows)

are limited to a single dim=window_size entry. Are there any plans to support rolling in N-D? This could be very useful for efficient gap filling, filtering, or other methods that use grid-cell neighbourhoods in multiple dimensions.
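For reference, a minimal sketch of what N-D rolling could look like; later xarray releases do accept multiple dim=window_size entries in .rolling(), though at the time of this issue only one was allowed.

```
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(10, 10), dims=('lat', 'lon'))

# One window size per dimension: a 3x3 neighbourhood around every grid cell.
smoothed = da.rolling(lat=3, lon=3, center=True).mean()
```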

Actually, I also asked myself why the groupby and resample methods don't take an N-D dim argument. This would allow performing not only temporal resampling but also spatial resampling in the lat/lon plane, or even spatio-temporal resampling (including up- and downsampling in either dim).
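On the spatial-resampling side, a sketch of block downsampling in the lat/lon plane, assuming an xarray version that provides coarsen (added in later releases; neither groupby nor resample did this at the time of the issue).

```
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(10, 10), dims=('lat', 'lon'))

# Block-average every 2x2 lat/lon neighbourhood: a simple spatial downsampling.
coarse = da.coarsen(lat=2, lon=2).mean()
```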

Anyway, thanks for xarray!

Regards, Norman

Reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/819/reactions


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);