issues
2 rows where state = "open" and user = 1117224, sorted by updated_at descending
id: 481761508
node_id: MDU6SXNzdWU0ODE3NjE1MDg=
number: 3223
title: Feature request for multiple tolerance values when using nearest method and sel()
user: NicWayand 1117224
state: open
locked: 0
comments: 4
created_at: 2019-08-16T19:53:31Z
updated_at: 2024-04-29T23:21:04Z
author_association: NONE
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/3223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
repo: xarray 13221727
type: issue
body:

```python
import xarray as xr
import numpy as np
import pandas as pd

# Create test data
ds = xr.Dataset()
ds.coords['lon'] = np.arange(-120, -60)
ds.coords['lat'] = np.arange(30, 50)
ds.coords['time'] = pd.date_range('2018-01-01', '2018-01-30')
ds['AirTemp'] = xr.DataArray(np.ones((ds.lat.size, ds.lon.size, ds.time.size)),
                             dims=['lat', 'lon', 'time'])

target_lat = [36.83]
target_lon = [-110]
target_time = [np.datetime64('2019-06-01')]

# Nearest pulls a date too far away
ds.sel(lat=target_lat, lon=target_lon, time=target_time, method='nearest')

# Adding tolerance for lat/lon, but it is also applied to time
ds.sel(lat=target_lat, lon=target_lon, time=target_time, method='nearest', tolerance=0.5)

# Ideally tolerance could accept a dictionary, but this currently fails
ds.sel(lat=target_lat, lon=target_lon, time=target_time, method='nearest',
       tolerance={'lat': 0.5, 'lon': 0.5, 'time': np.timedelta64(1, 'D')})
```

Expected Output: A dataset with the nearest values, subject to a tolerance on each dimension.

Problem Description: I would like to add the ability for tolerance to accept a dictionary of tolerance values for different dimensions. Before I try implementing it, I wanted to 1) check that it doesn't already exist and that no one is already working on it, and 2) get suggestions for how to proceed.

Output of
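The request above is for a single sel() call to accept per-dimension tolerances. Until something like that exists, one possible workaround, not taken from the issue but sketched here on the same toy dataset, is to chain single-dimension .sel() calls so that each dimension gets its own tolerance; the chain raises KeyError as soon as any dimension has no coordinate within its tolerance.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Same toy dataset as in the issue body above.
ds = xr.Dataset()
ds.coords['lon'] = np.arange(-120, -60)
ds.coords['lat'] = np.arange(30, 50)
ds.coords['time'] = pd.date_range('2018-01-01', '2018-01-30')
ds['AirTemp'] = xr.DataArray(np.ones((ds.lat.size, ds.lon.size, ds.time.size)),
                             dims=['lat', 'lon', 'time'])

# Chain one .sel() per dimension so each lookup gets its own tolerance.
try:
    subset = (
        ds.sel(lat=[36.83], method='nearest', tolerance=0.5)
          .sel(lon=[-110], method='nearest', tolerance=0.5)
          .sel(time=[np.datetime64('2019-06-01')], method='nearest',
               tolerance=np.timedelta64(1, 'D'))
    )
except KeyError:
    # No time value lies within one day of the 2019 target,
    # so the time lookup fails rather than silently returning a far-away date.
    subset = None
```

This is only a sketch of the chaining idea; the dictionary-valued tolerance proposed in the issue would express the same intent in one call.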
225536793 | MDU6SXNzdWUyMjU1MzY3OTM= | 1391 | Adding Example/Tutorial of importing data to Xarray (Merge/conact/etc) | NicWayand 1117224 | open | 0 | rabernat 1197350 | 11 | 2017-05-01T21:50:33Z | 2019-07-12T19:43:30Z | NONE | I love xarray for analysis but getting my data into xarray often takes a lot more time than I think it should. I am a hydrologist and very often hydro data is poorly stored/formatted, which means I need to do multiple merge/conact/combine_first operations etc. to get to a nice xarray dataset format. I think having more examples for importing different types of data would be helpful (for me and possibly others), instead of my current approach, which often entails trial and error. I can start off by providing an example of importing funky hydrology data that hopefully would be general enough for others to use. Maybe we can compile other examples as well. With the end goal of adding to the readthedocs. @klapo @jhamman |
{ "url": "https://api.github.com/repos/pydata/xarray/issues/1391/reactions", "total_count": 7, "+1": 7, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xarray 13221727 | issue |
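The combining steps this issue wants documented (concat, combine_first, merge) can be illustrated on toy data. The sketch below is not from the issue; the station variables (flow, stage) and their values are made up purely for illustration.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Two overlapping, differently shaped pieces of "station" data (illustrative values).
times_a = pd.date_range('2018-01-01', periods=5)
times_b = pd.date_range('2018-01-04', periods=5)

flow_a = xr.DataArray(np.random.rand(5), coords={'time': times_a}, dims='time', name='flow')
flow_b = xr.DataArray(np.random.rand(5), coords={'time': times_b}, dims='time', name='flow')
stage = xr.DataArray(np.random.rand(5), coords={'time': times_a}, dims='time', name='stage')

# concat stacks objects along a dimension (here the existing time axis).
flow_all = xr.concat([flow_a, flow_b], dim='time')

# combine_first fills gaps in one object from another, resolving the overlap
# in favour of the first operand.
flow_filled = flow_a.combine_first(flow_b)

# merge lines up different variables onto shared coordinates (outer join by default).
ds = xr.merge([flow_filled, stage])
```

A tutorial of the kind proposed in the issue would presumably walk through steps like these on real, messy hydrology files rather than synthetic arrays.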
```sql
CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo] ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone] ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee] ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user] ON [issues] ([user]);
```
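The filter shown at the top of this page (state = "open" and user = 1117224, sorted by updated_at descending) can be reproduced directly against this schema. A minimal sketch using Python's sqlite3 module; the filename github.db is a placeholder for whichever SQLite file actually contains this issues table.

```python
import sqlite3

# Placeholder filename: point this at the SQLite file holding the issues table above.
conn = sqlite3.connect("github.db")

# Same filter and ordering as the page: open issues by user 1117224,
# newest update first.
rows = conn.execute(
    """
    SELECT id, number, title, state, created_at, updated_at
    FROM issues
    WHERE state = 'open' AND user = 1117224
    ORDER BY updated_at DESC
    """
).fetchall()

for row in rows:
    print(row)

conn.close()
```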