issues


11 rows where repo = 13221727 and user = 12929592 sorted by updated_at descending


state

  • closed 10
  • open 1

type

  • issue 11

repo

  • xarray 11
id node_id number title user state locked assignee milestone comments created_at updated_at ▲ closed_at author_association active_lock_reason draft pull_request body reactions performed_via_github_app state_reason repo type
124300184 MDU6SXNzdWUxMjQzMDAxODQ= 690 hourofyear slharris 12929592 closed 0     4 2015-12-30T03:36:37Z 2022-05-12T21:22:37Z 2019-01-29T22:44:38Z NONE      

Is there a way to use 'hourofyear' in the same way 'dayofyear' works? I want to calculate the mean temperature for a 2D dataset for each hour of the year, based on 40 years of hourly data. I realise this might be a pandas question, but if I receive an answer from a pandas forum I don't know whether I would be able to work out how to apply it to an xray dataset.

Below is the code I would use to calculate 'dayofyear', but with 'hour' substituted for 'day'. Obviously it does not work! Any feedback will be greatly appreciated.

ds = xray.open_mfdataset('/DATA/*TEMP.nc')
ds_variable = ds['TEMP']
hourofyear = ds_variable.groupby('time.hourofyear').mean('time')
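
There is no built-in 'time.hourofyear' component, but an equivalent grouping key can be derived from 'dayofyear' and 'hour'. A minimal sketch with a recent xarray (the xray project was later renamed xarray), assuming hourly data on a datetime 'time' coordinate:

```python
import xarray as xr

ds = xr.open_mfdataset('/DATA/*TEMP.nc')   # path from the question
temp = ds['TEMP']

# Derive an hour-of-year key (0..8783) from dayofyear and hour, attach it as a
# coordinate, and group on it; mean('time') then averages across the 40 years.
hourofyear = (temp['time'].dt.dayofyear - 1) * 24 + temp['time'].dt.hour
temp = temp.assign_coords(hourofyear=hourofyear)
hourly_climatology = temp.groupby('hourofyear').mean('time')
```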

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/690/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
105519744 MDU6SXNzdWUxMDU1MTk3NDQ= 563 Upsampling - fill slharris 12929592 open 0     4 2015-09-09T05:05:27Z 2019-07-13T01:02:24Z   NONE      

Will the next version of xray have the capability to fill when upsampling?
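
Upsampling with fill methods was added to resample in later xarray releases. A toy sketch, assuming a daily series being upsampled to hourly:

```python
import pandas as pd
import xarray as xr

times = pd.date_range('2000-01-01', periods=3, freq='D')
daily = xr.DataArray([1.0, 2.0, 3.0], coords={'time': times}, dims='time')

# Upsample to hourly and forward-fill the new time steps;
# .bfill(), .nearest() and .interpolate() are the other fill options.
hourly = daily.resample(time='1H').ffill()
```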

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/563/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
    xarray 13221727 issue
98074194 MDU6SXNzdWU5ODA3NDE5NA== 501 xray methods using shapefile as mask? slharris 12929592 closed 0     17 2015-07-30T02:49:35Z 2019-06-18T10:49:53Z 2016-12-29T01:42:03Z NONE      

Can we set a shapefile as a mask for each netcdf file and run xray methods for values within the shapefile region?

For example, if I want to create a time series of monthly mean temperature for 'mystate' from a netcdf file that contains data for the whole country:

filepath = r"DATA/temp/_/_temp.nc" shapefile = r"DATA/mystate.shp"

ds=xray.open_mfdataset(filepath) ds_variable=ds['temp'] monthlymean=ds_variable.resample('1MS', dim='time', how='mean') meanmonthlyofmystate=monthlymean.groupby('time').mean() #add somewhere here the shapefile meanmonthlyofmystate.to_pandas().plot()
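
xray/xarray has no direct shapefile support, but a boolean mask on the data's latitude/longitude grid can be combined with where(). A rough sketch, assuming a mask has already been rasterized from DATA/mystate.shp onto that grid (for example with regionmask or rasterio) and saved as a hypothetical DATA/mystate_mask.nc, and that the spatial dimensions are named 'latitude' and 'longitude':

```python
import xarray as xr

ds = xr.open_mfdataset(filepath)            # filepath as defined in the question
temp = ds['temp']

# Hypothetical pre-computed mask: 1 inside 'mystate', 0 (or NaN) outside.
mask = xr.open_dataarray('DATA/mystate_mask.nc')

masked = temp.where(mask == 1)                                    # NaN outside the region
monthlymean = masked.resample(time='1MS').mean()                  # monthly means (newer resample syntax)
mystate_series = monthlymean.mean(dim=['latitude', 'longitude'])  # area-average time series
mystate_series.to_pandas().plot()
```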

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/501/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
88897697 MDU6SXNzdWU4ODg5NzY5Nw== 436 Examples combining multiple files slharris 12929592 closed 0     4 2015-06-17T03:12:45Z 2019-01-15T20:10:37Z 2019-01-15T20:10:37Z NONE      

Are you able to provide more examples of combining and working with multiple netcdf files? All of the existing examples appear to work with a single netcdf file. I would like to create time series plots and spatial plots of anomalies of climate data for hundreds of netcdf files separated by month.
I must admit I am not very experienced, but I think xray may be a better option than how I have been processing netcdf files in the past. Thanks.
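
For reference, a small sketch of the usual multi-file pattern, assuming monthly files that share coordinates, a variable named 'temp', and spatial dimensions named 'latitude' and 'longitude' (all placeholders):

```python
import xarray as xr

# open_mfdataset concatenates many files into a single lazy (dask-backed) dataset
ds = xr.open_mfdataset('DATA/monthly/*.nc', combine='by_coords')

# Monthly climatology across all years, and anomalies relative to it
climatology = ds['temp'].groupby('time.month').mean('time')
anomalies = ds['temp'].groupby('time.month') - climatology

anomalies.mean(dim=['latitude', 'longitude']).plot()   # area-mean anomaly time series
anomalies.isel(time=0).plot()                          # spatial anomaly map for one time step
```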

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/436/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
325933825 MDU6SXNzdWUzMjU5MzM4MjU= 2178 if 10% of ds meets criteria then count slharris 12929592 closed 0     2 2018-05-24T01:40:54Z 2018-05-24T03:03:17Z 2018-05-24T03:03:17Z NONE      

Can I please have help in calculating the total number of days on which at least 10% of my dataset (on each day) is equal to or greater than a given value (e.g. 35)? I will then use the where function and loop this through a number of regions and periods, but first I need help figuring out how to apply the 10% condition. Any help will be greatly appreciated. Sarah

# open dataset
ds = xr.open_mfdataset('/DATA/WRF/sample10years///*FFDI.nc')

# select period
fireseason = ds['FFDI'].sel(time=slice('2008-09-01', '2009-05-01'))

# resample to daily max
dailymax = fireseason.resample(time='1D').max('time')

# count number of days with dailymax >= 35 if at least 10% of that day meets that criteria
dailycountTOTAL = (if 10% of dailymax >= 35).count()
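
One way to express the missing step, continuing from dailymax above and assuming the spatial dimensions are named 'latitude' and 'longitude': compute the fraction of grid cells at or above the threshold on each day, then count the days on which that fraction reaches 10%.

```python
# Fraction of grid cells meeting the criterion on each day (booleans average to a fraction)
frac_above = (dailymax >= 35).mean(dim=['latitude', 'longitude'])

# Number of days on which at least 10% of the domain reached the threshold
dailycountTOTAL = int((frac_above >= 0.1).sum())
```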

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2178/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
238731491 MDU6SXNzdWUyMzg3MzE0OTE= 1466 Rasterio - Attribute Error slharris 12929592 closed 0     5 2017-06-27T04:01:08Z 2017-06-28T13:23:54Z 2017-06-27T04:35:18Z NONE      

I am able to open a tif directly with rasterio, but when I try to open the same tif through rasterio in xarray I receive the following error message:

/Users/slburns/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/xarray/backends/rasterio_.pyc in open_rasterio(filename, chunks, cache, lock)
    141         # CRS is a dict-like object specific to rasterio
    142         # We convert it back to a PROJ4 string using rasterio itself
--> 143         attrs['crs'] = riods.crs.to_string()
    144         # Maybe we'd like to parse other attributes here (for later)
    145

AttributeError: 'dict' object has no attribute 'to_string'

Is there some other step I should be doing first? Thanks
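
For context, a sketch of the calls involved (the filename is a placeholder); the xarray side goes through open_rasterio, as shown in the traceback above:

```python
import rasterio
import xarray as xr

# Opening the file directly with rasterio works:
with rasterio.open('example.tif') as src:
    print(src.crs)

# The equivalent xarray call from this report, which raised
# AttributeError: 'dict' object has no attribute 'to_string'
da = xr.open_rasterio('example.tif')
```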

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1466/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
182168383 MDU6SXNzdWUxODIxNjgzODM= 1043 combine datasets and replace slharris 12929592 closed 0     2 2016-10-11T04:02:29Z 2016-10-12T02:25:58Z 2016-10-12T02:25:58Z NONE      

I would like to replace the time in one dataset with the time in another dataset. I have tried .concat(), .merge() and .update() with various errors. Details of the errors for each of those steps and the datasets are posted below. Any feedback on how I might resolve this will be greatly appreciated.

```
ds = xray.open_mfdataset('/DATA/WRF///*T_SFC.nc')

time = ds['time'].to_index()

time_utc = time.tz_localize('UTC')
au_tz = pytz.timezone('Australia/Sydney')

# convert UTC to local time
time_local = time_utc.tz_convert(au_tz)
time_local = time_local.tz_localize(None)

local_series = time_local.to_series()
local_df = local_series.to_frame()
local_df.columns = ['localtime']

local_ds = xray.Dataset.from_dataframe(local_df)

newconcat_ds = xray.concat(ds, local_ds['localtime'])
# TypeError: can only concatenate xray Dataset and DataArray objects

newmerge_ds = ds.merge(local_ds)
# InvalidIndexError: Reindexing only valid with uniquely valued Index objects

newupdate_ds = ds.update(ds['time'], local_ds['time'])
# TypeError: unhashable type: 'DataArray'
```

I would like to replace the time in this dataset:

```
In [333]: ds
Out[333]:
<xray.Dataset>
Dimensions:    (latitude: 106, longitude: 193, time: 17520)
Coordinates:
  * latitude   (latitude) float32 -39.2 -39.1495 -39.099 -39.0486 -38.9981 ...
  * longitude  (longitude) float32 140.8 140.848 140.896 140.944 140.992 ...
  * time       (time) datetime64[ns] 2009-01-01 2009-01-01T01:00:00 ...
Data variables:
    T_SFC      (time, latitude, longitude) float64 13.83 13.86 13.89 13.92 ...
Attributes:
    creationTime: 1431922712
    creationTimeString: Sun May 17 21:18:32 PDT 2015
    Conventions: COARDS
```

I would like to use the time in this dataset to replace the time in the first dataset:

```
In [334]: local_ds
Out[334]:
<xray.Dataset>
Dimensions:    (time: 17520)
Coordinates:
  * time       (time) datetime64[ns] 2009-01-01T11:00:00 2009-01-01T12:00:00 ...
Data variables:
    localtime  (time) datetime64[ns] 2009-01-01T11:00:00 2009-01-01T12:00:00 ...
```
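
Given the two datasets shown above, a minimal sketch of one way to do the replacement without concat/merge/update is to assign the converted times directly as the 'time' coordinate (time_local and local_ds are the objects built in the snippet above; lengths and order must match):

```python
# Replace the UTC time coordinate with the local-time index
ds_local = ds.assign_coords(time=time_local)

# or, equivalently, using the values from the second dataset
ds_local = ds.assign_coords(time=local_ds['localtime'].values)
```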

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1043/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
181005061 MDU6SXNzdWUxODEwMDUwNjE= 1036 convert xarray dataset to local timezone slharris 12929592 closed 0     2 2016-10-04T21:07:02Z 2016-10-11T04:02:51Z 2016-10-11T04:02:51Z NONE      

Can I convert an xarray dataset to a different timezone? I have tried using similar steps that I would use in pandas to convert from UTC to 'Australia/Sydney'. I have pasted below some of these steps, along with a small section of the dataset I am working with. Any feedback will be greatly appreciated.

ds = xray.open_mfdataset('/DATA/WRF///*T_SFC.nc')

import pytz
ds_utc = ds['time'].tz_localize(pytz.UTC)
au_tz = pytz.timezone('Australia/Sydney')
ds_local = ds_utc.astimezone(au_tz)

<xray.Dataset>
Dimensions:    (latitude: 106, longitude: 193, time: 17520)
Coordinates:
  * latitude   (latitude) float32 -39.2 -39.1495 -39.099 -39.0486 -38.9981 ...
  * longitude  (longitude) float32 140.8 140.848 140.896 140.944 140.992 ...
  * time       (time) datetime64[ns] 2009-01-01 2009-01-01T01:00:00 ...
Data variables:
    T_SFC      (time, latitude, longitude) float64 13.83 13.86 13.89 13.92 ...
Attributes:
    creationTime: 1431922712
    creationTimeString: Sun May 17 21:18:32 PDT 2015
    Conventions: COARDS
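
xarray's datetime64 coordinates are timezone-naive, so tz_localize is not available on the Dataset or DataArray. A sketch of a common workaround, converting the underlying pandas index and assigning it back (assumes the times in the files are UTC):

```python
import xarray as xr

ds = xr.open_mfdataset('/DATA/WRF///*T_SFC.nc')   # path as in the question

times = ds.indexes['time']                        # pandas DatetimeIndex
local = (times.tz_localize('UTC')
              .tz_convert('Australia/Sydney')
              .tz_localize(None))                 # drop tz info so xarray accepts it
ds_local = ds.assign_coords(time=local)
```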

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1036/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
180503054 MDU6SXNzdWUxODA1MDMwNTQ= 1025 Extract value for given time latitude and longitude slharris 12929592 closed 0     5 2016-10-02T09:04:32Z 2016-10-03T09:07:31Z 2016-10-03T09:07:31Z NONE      

I would like to loop through a list of dates, latitudes and longitudes and extract the maximum daily temperature from an hourly dataset of netcdf files. This appears more difficult than I thought it would be, because I cannot seem to use the given latitude and longitude (even though I know the latitude and longitude match a grid point).

ds = xray.open_mfdataset('/DATA/WRF///*Temp.nc')
ds_variable = ds['Temp']
dailymax = ds_variable.resample('D', dim='time', how='max')

MaxTempattime = dailymax.sel(time='2015-02-01')
MaxTempatpoint = MaxTempattime.isel(latitude=-39.1495, longitude=140.848)  # this is where the problem occurs

print MaxTempatpoint.values

I see 'slice' can take a given latitude and longitude but I can't set a range for each of the thousands of points I need. Should I be using some type of index for latitude and longitude? Any feedback on the best approach for extracting a value at a given time, latitude and longitude will be greatly appreciated.
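
The problem line mixes up the two selection methods: isel() takes integer positions, while sel() takes coordinate labels, and method='nearest' snaps to the closest grid point. A sketch continuing from dailymax above:

```python
# Select the day by label, then the grid point nearest the requested coordinates
day = dailymax.sel(time='2015-02-01')
point = day.sel(latitude=-39.1495, longitude=140.848, method='nearest')
print(point.values)
```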

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/1025/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
121336727 MDU6SXNzdWUxMjEzMzY3Mjc= 673 resampling with missing data slharris 12929592 closed 0     2 2015-12-09T20:55:09Z 2015-12-13T00:27:43Z 2015-12-13T00:27:24Z NONE      

I regularly use resample and groupby to analyse a 40-year hourly 2D dataset with no problems. However, a new dataset I am working with is missing some leap-year days, and the output is wrong; it looks as though months have been swapped around. Is this because the mean is calculated by dividing by the number of days in the month? So my actual question is: how is the mean taken when using groupby or resample? Does it count the number of hours or days in the dataset, and how does it deal with missing data?

Some of the steps I follow:

ds = xray.open_mfdataset(filepath)
dsvariable = ds[variable]
resampledaily = dsvariable.resample('D', dim='time', how='max')
resamplemonthly = resampledaily.resample('1MS', dim='time', how='mean')
monthly_ts = resamplemonthly.groupby('time').mean()
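
To make the behaviour concrete: the mean in each resample/groupby bin is taken over the values actually present in that bin (NaNs are skipped by default, and absent timestamps simply do not contribute), not by dividing by the calendar length of the month. A toy sketch with a recent xarray:

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range('2000-01-01', periods=60, freq='D')
da = xr.DataArray(np.arange(60.0), coords={'time': times}, dims='time')
da[5:10] = np.nan                          # simulate missing days in January

monthly = da.resample(time='1MS').mean()   # each month is averaged over its non-missing days
print(monthly.values)
```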

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/673/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue
93416316 MDU6SXNzdWU5MzQxNjMxNg== 456 Percentiles slharris 12929592 closed 0     2 2015-07-07T01:48:40Z 2015-09-09T00:59:04Z 2015-09-09T00:59:04Z NONE      

Is there a command to calculate percentiles?

I am currently using xray to create max, min and mean plots for each season for 40 years of data. I have tried replacing 'max' with 'percentile' or 'quantile' and changing ('time') to ('time', 90), but there is no attribute 'percentile' or 'quantile'.

ds = xray.open_mfdataset('myfiles///*temp.nc')
mytemp = ds['temp']
eachseasonmax = mytemp.groupby('time.season').max('time')
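
A quantile method was added to xarray in later releases; it takes a fraction (0.9) rather than a percentage (90). A sketch of the seasonal 90th percentile, reusing the names from the snippet above:

```python
# 90th percentile of temperature for each season, reducing over time
eachseason_p90 = mytemp.groupby('time.season').quantile(0.9, dim='time')
```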

Also is this the correct place for these types of questions?

Thanks

{
    "url": "https://api.github.com/repos/pydata/xarray/issues/456/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  completed xarray 13221727 issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);