issue_comments
15 rows where author_association = "CONTRIBUTOR" and user = 29051639 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
966198058 | https://github.com/pydata/xarray/issues/1068#issuecomment-966198058 | https://api.github.com/repos/pydata/xarray/issues/1068 | IC_kwDOAMm_X845lwMq | AyrtonB 29051639 | 2021-11-11T10:46:16Z | 2021-11-11T10:46:16Z | CONTRIBUTOR | Unfortunately not @zjans |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Use xarray.open_dataset() for password-protected Opendap files 186169975 | |
864477138 | https://github.com/pydata/xarray/issues/1068#issuecomment-864477138 | https://api.github.com/repos/pydata/xarray/issues/1068 | MDEyOklzc3VlQ29tbWVudDg2NDQ3NzEzOA== | AyrtonB 29051639 | 2021-06-19T23:51:09Z | 2021-06-19T23:51:09Z | CONTRIBUTOR | I'm also getting the same error when running the snippet below. I'm using pydap==3.2.2 and xarray==0.18.0; any help would be much appreciated!

```python
import xarray as xr
from pydap.client import open_url
from pydap.cas.urs import setup_session

url = "..."  # OPeNDAP endpoint (elided in this export)
username = "my_username"
password = "my_password"

session = setup_session(username, password, check_url=url)
pydap_ds = open_url(url, session=session)

store = xr.backends.PydapDataStore(pydap_ds)
ds = xr.open_dataset(store)
```

```html
HTTPError: 302 Found
<html><head>
<title>302 Found</title>
</head><body>
Found
The document has moved here.
</body></html>
```
|
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Use xarray.open_dataset() for password-protected Opendap files 186169975 | |
797555413 | https://github.com/pydata/xarray/pull/4659#issuecomment-797555413 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc5NzU1NTQxMw== | AyrtonB 29051639 | 2021-03-12T15:17:16Z | 2021-03-12T15:17:16Z | CONTRIBUTOR | From what I can gather there are more serious back-end considerations needed before this can be progressed. Personally, I've been monkey-patching this code in, which has solved my particular use-case; hopefully it's helpful for yours.

```python
import xarray as xr
import pandas as pd
import numpy as np
import dask.dataframe as dd
from dask.distributed import Client
import numcodecs

from types import ModuleType
from datetime import timedelta
from dask.dataframe.core import DataFrame as ddf
from numbers import Number
from typing import Any, Union, Sequence, Tuple, Mapping, Hashable, Dict, Optional, Set

from xarray.core import dtypes, groupby, rolling, resample, weighted, utils
# from xarray.core.accessor_dt import CombinedDatetimelikeAccessor
from xarray.core.variable import Variable, IndexVariable
from xarray.core.merge import PANDAS_TYPES
from xarray.core.variable import NON_NUMPY_SUPPORTED_ARRAY_TYPES, IS_NEP18_ACTIVE, _maybe_wrap_data, _possibly_convert_objects
from xarray.core.dataarray import _check_data_shape, _infer_coords_and_dims, _extract_indexes_from_coords
from xarray.core.common import ImplementsDatasetReduce, DataWithCoords


def as_compatible_data(data, fastpath=False):
    """Prepare and wrap data to put in a Variable.

    - If data does not have the necessary attributes, convert it to ndarray.
    - If data has dtype=datetime64, ensure that it has ns precision. If it's
      a pandas.Timestamp, convert it to datetime64.
    - If data is already a pandas or xarray object (other than an Index),
      just use the values.

    Finally, wrap it up with an adapter if necessary.
    """
    if fastpath and getattr(data, "ndim", 0) > 0:
        # can't use fastpath (yet) for scalars
        return _maybe_wrap_data(data)
    # ... (remainder of the function body elided in this export)

xr.core.variable.as_compatible_data = as_compatible_data


class DataArray(xr.core.dataarray.DataArray):
    ...  # (class body elided in this export)

xr.core.dataarray.DataArray = DataArray
xr.DataArray = DataArray


def _maybe_chunk(
    name,
    var,
    chunks=None,
    token=None,
    lock=None,
    name_prefix="xarray-",
    overwrite_encoded_chunks=False,
):
    from dask.base import tokenize
    # ... (remainder of the function body elided in this export)


class Dataset(xr.Dataset):
    """A multi-dimensional, in memory, array database."""
    ...  # (class body elided in this export)

xr.core.dataarray.Dataset = Dataset
xr.Dataset = Dataset
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740032261 | https://github.com/pydata/xarray/pull/4659#issuecomment-740032261 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAzMjI2MQ== | AyrtonB 29051639 | 2020-12-07T16:36:36Z | 2020-12-07T16:36:36Z | CONTRIBUTOR | I've added … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740020080 | https://github.com/pydata/xarray/pull/4659#issuecomment-740020080 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAyMDA4MA== | AyrtonB 29051639 | 2020-12-07T16:17:25Z | 2020-12-07T16:17:25Z | CONTRIBUTOR | That makes sense, thanks @keewis |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740002632 | https://github.com/pydata/xarray/pull/4659#issuecomment-740002632 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAwMjYzMg== | AyrtonB 29051639 | 2020-12-07T15:49:00Z | 2020-12-07T15:49:00Z | CONTRIBUTOR | Thanks, yes I need to load the library for type-hinting and type checks. When you say … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
739991914 | https://github.com/pydata/xarray/issues/3929#issuecomment-739991914 | https://api.github.com/repos/pydata/xarray/issues/3929 | MDEyOklzc3VlQ29tbWVudDczOTk5MTkxNA== | AyrtonB 29051639 | 2020-12-07T15:32:01Z | 2020-12-07T15:32:01Z | CONTRIBUTOR | I've added a PR for the new feature but it's currently failing tests as the test-suite doesn't seem to have Dask installed. Any advice on how to get this PR prepared for merging would be appreciated. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request xarray.Dataset.from_dask_dataframe 593029940 | |
739988806 | https://github.com/pydata/xarray/pull/4659#issuecomment-739988806 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDczOTk4ODgwNg== | AyrtonB 29051639 | 2020-12-07T15:27:10Z | 2020-12-07T15:27:10Z | CONTRIBUTOR | During testing I'm currently encountering the issue: … How should testing of dask DataArrays be approached? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
739904265 | https://github.com/pydata/xarray/issues/3929#issuecomment-739904265 | https://api.github.com/repos/pydata/xarray/issues/3929 | MDEyOklzc3VlQ29tbWVudDczOTkwNDI2NQ== | AyrtonB 29051639 | 2020-12-07T13:01:57Z | 2020-12-07T13:02:20Z | CONTRIBUTOR | One of the things I was hoping to include in my approach is the preservation of the column dimension names; however, if I were to use … Thanks for the advice @shoyer, I reached a similar opinion and so have been working on the dim-compute route. The issue is that a Dask array's shape uses np.nan for uncomputed dimensions, rather than leaving a delayed object the way a Dask dataframe's shape does. I looked into returning the dask dataframe rather than the dask array, but this didn't feel like it fit with the rest of the code and produced another issue, as dask dataframes don't have a dtype attribute. I'll continue to look into alternatives. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request xarray.Dataset.from_dask_dataframe 593029940 | |
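The dim-compute route described in the comment above can be sketched without dask at all. This is a minimal illustration of the duck-typed "has `.compute()`" convention; `DelayedScalar` and `materialise_shape` are hypothetical names invented here, not dask or xarray API.

```python
class DelayedScalar:
    """A stand-in for a delayed/uncomputed dimension size."""

    def __init__(self, value):
        self._value = value

    def compute(self):
        # Materialise the concrete value, as dask's delayed objects do
        return self._value


def materialise_shape(shape):
    # Replace any entry that knows how to compute itself with its
    # concrete value; plain ints pass through unchanged.
    return tuple(s.compute() if hasattr(s, "compute") else s for s in shape)


print(materialise_shape((DelayedScalar(90386), 12)))  # -> (90386, 12)
```

The same `hasattr(x, "compute")` pattern appears verbatim in the `shape`/`coords` snippet later in this thread.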
739338154 | https://github.com/pydata/xarray/pull/4653#issuecomment-739338154 | https://api.github.com/repos/pydata/xarray/issues/4653 | MDEyOklzc3VlQ29tbWVudDczOTMzODE1NA== | AyrtonB 29051639 | 2020-12-05T19:18:10Z | 2020-12-05T19:18:10Z | CONTRIBUTOR | Nothing like a transient error to keep everyone on their toes. Thanks again! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
corrected a minor spelling mistake 757751542 | |
739336219 | https://github.com/pydata/xarray/pull/4653#issuecomment-739336219 | https://api.github.com/repos/pydata/xarray/issues/4653 | MDEyOklzc3VlQ29tbWVudDczOTMzNjIxOQ== | AyrtonB 29051639 | 2020-12-05T19:10:27Z | 2020-12-05T19:10:27Z | CONTRIBUTOR | Thanks @dcherian, out of interest, what would I have had to do to remove that test failure? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
corrected a minor spelling mistake 757751542 | |
739334281 | https://github.com/pydata/xarray/issues/3929#issuecomment-739334281 | https://api.github.com/repos/pydata/xarray/issues/3929 | MDEyOklzc3VlQ29tbWVudDczOTMzNDI4MQ== | AyrtonB 29051639 | 2020-12-05T18:52:49Z | 2020-12-05T18:52:49Z | CONTRIBUTOR | For context, this is the function I'm using to convert the Dask DataFrame to a DataArray.

```python
def from_dask_dataframe(df, index_name=None, columns_name=None):
    def extract_dim_name(df, dim='index'):
        if getattr(df, dim).name is None:
            getattr(df, dim).name = dim
        # ... (remainder elided in this export)
    # ... (remainder of the function body elided in this export)

df.index.name = 'datetime'
df.columns.name = 'fueltypes'

da = from_dask_dataframe(df)
```

I'm also conscious that my question is different to @raybellwaves' as they were asking about Dataset creation, whereas I'm interested in creating a DataArray, which requires different functionality. I'm assuming this is the correct place to post though, as @keewis closed my issue and linked to this one. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request xarray.Dataset.from_dask_dataframe 593029940 | |
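The dimension-name preservation the helper above relies on can be shown with plain pandas (a minimal sketch; this `extract_dim_name` mirrors the inner helper in the comment, returning the resolved name for convenience).

```python
import pandas as pd


def extract_dim_name(df, dim="index"):
    # Fall back to the generic axis name ("index"/"columns") when the
    # user hasn't set one, mirroring the inner helper above.
    axis = getattr(df, dim)
    if axis.name is None:
        axis.name = dim
    return axis.name


df = pd.DataFrame({"wind": [1.0, 2.0], "solar": [0.5, 0.7]})
print(extract_dim_name(df))             # no name set -> "index"

df.index.name = "datetime"
df.columns.name = "fueltypes"
print(extract_dim_name(df))             # -> "datetime"
print(extract_dim_name(df, "columns"))  # -> "fueltypes"
```

With both axis names set, constructing a DataArray from the frame can carry 'datetime' and 'fueltypes' through as dimension names instead of the generic 'index'/'columns'.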
739330830 | https://github.com/pydata/xarray/issues/4650#issuecomment-739330830 | https://api.github.com/repos/pydata/xarray/issues/4650 | MDEyOklzc3VlQ29tbWVudDczOTMzMDgzMA== | AyrtonB 29051639 | 2020-12-05T18:23:10Z | 2020-12-05T18:23:10Z | CONTRIBUTOR | Have started to implement this but will continue the discussion in 3929 |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to Pass Dask Arrays as `data` in DataArray Creation 757660307 | |
739330558 | https://github.com/pydata/xarray/issues/3929#issuecomment-739330558 | https://api.github.com/repos/pydata/xarray/issues/3929 | MDEyOklzc3VlQ29tbWVudDczOTMzMDU1OA== | AyrtonB 29051639 | 2020-12-05T18:20:33Z | 2020-12-05T18:20:33Z | CONTRIBUTOR | I've been trying to implement this and have managed to create a … The modifications I've made so far are adding the following above line 400 in dataarray.py:

```python
shape = tuple([
    dim_size.compute() if hasattr(dim_size, 'compute') else dim_size
    for dim_size in data.shape
])

coords = tuple([
    coord.compute() if hasattr(coord, 'compute') else coord
    for coord in coords
])
```

and on line 403 by replacing … The issue I have is that when I then want to use the DataArray and do something like …

```
ValueError                                Traceback (most recent call last)
<ipython-input-23-5d739a721388> in <module>
----> 1 da.sel(datetime='2020')

~\anaconda3\envs\DataHub\lib\site-packages\xarray\core\dataarray.py in sel(self, indexers, method, tolerance, drop, **indexers_kwargs)
   1219
   1220         """
-> 1221         ds = self._to_temp_dataset().sel(
   1222             indexers=indexers,
   1223             drop=drop,

~\anaconda3\envs\DataHub\lib\site-packages\xarray\core\dataarray.py in _to_temp_dataset(self)
    499
    500     def _to_temp_dataset(self) -> Dataset:
--> 501         return self._to_dataset_whole(name=_THIS_ARRAY, shallow_copy=False)
    502
    503     def _from_temp_dataset(

~\anaconda3\envs\DataHub\lib\site-packages\xarray\core\dataarray.py in _to_dataset_whole(self, name, shallow_copy)
    551
    552         coord_names = set(self._coords)
--> 553         dataset = Dataset._construct_direct(variables, coord_names, indexes=indexes)
    554         return dataset
    555

~\anaconda3\envs\DataHub\lib\site-packages\xarray\core\dataset.py in _construct_direct(cls, variables, coord_names, dims, attrs, indexes, encoding, file_obj)
    959         """
    960         if dims is None:
--> 961             dims = calculate_dimensions(variables)
    962         obj = object.__new__(cls)
    963         obj._variables = variables

~\anaconda3\envs\DataHub\lib\site-packages\xarray\core\dataset.py in calculate_dimensions(variables)
    207                     "conflicting sizes for dimension %r: "
    208                     "length %s on %r and length %s on %r"
--> 209                     % (dim, size, k, dims[dim], last_used[dim])
    210                 )
    211         return dims

ValueError: conflicting sizes for dimension 'datetime': length nan on <this-array> and length 90386 on 'datetime'
```

This occurs due to the construction of … I'm assuming there's an alternative way to construct … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Feature request xarray.Dataset.from_dask_dataframe 593029940 | |
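Why the nan-valued shape trips the size check can be shown with a stripped-down stand-in for the consistency test in the traceback above (`check_dim_size` is a hypothetical name, not xarray's actual `calculate_dimensions`):

```python
def check_dim_size(declared, coord_length):
    # A stripped-down stand-in for xarray's dimension consistency
    # check: the declared size must equal the coordinate length.
    if declared != coord_length:
        raise ValueError(
            f"conflicting sizes for dimension: length {declared} "
            f"and length {coord_length}"
        )


# An uncomputed dask dimension reports its size as nan, and nan never
# compares equal to anything (not even itself), so the check is
# guaranteed to fail regardless of the real length.
try:
    check_dim_size(float("nan"), 90386)
except ValueError as exc:
    print(exc)
```

This is why computing the delayed dimension sizes up front, as in the `shape = tuple([...])` snippet above, avoids the error.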
739322106 | https://github.com/pydata/xarray/issues/4650#issuecomment-739322106 | https://api.github.com/repos/pydata/xarray/issues/4650 | MDEyOklzc3VlQ29tbWVudDczOTMyMjEwNg== | AyrtonB 29051639 | 2020-12-05T17:09:23Z | 2020-12-05T17:09:23Z | CONTRIBUTOR | Thanks, I saw dask/dask#6058 but missed #3929. If I'm understanding you correctly, there should be no problem passing a dask array for the data parameter; it's just the dims/coords. If the … |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Ability to Pass Dask Arrays as `data` in DataArray Creation 757660307 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```
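The filter this page applies can be reproduced against the schema above with Python's built-in sqlite3 (a sketch: foreign-key clauses are dropped for self-containment, and the exact SQL Datasette generates may differ).

```python
import sqlite3

# Build a simplified copy of the schema above, in memory.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT, [issue_url] TEXT, [id] INTEGER PRIMARY KEY,
   [node_id] TEXT, [user] INTEGER, [created_at] TEXT, [updated_at] TEXT,
   [author_association] TEXT, [body] TEXT, [reactions] TEXT,
   [performed_via_github_app] TEXT, [issue] INTEGER
);
""")

# One representative row from this page.
conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, updated_at)"
    " VALUES (?, ?, ?, ?)",
    (966198058, 29051639, "CONTRIBUTOR", "2021-11-11T10:46:16Z"),
)

# The page's filter: CONTRIBUTOR comments by user 29051639,
# newest first by updated_at.
rows = conn.execute(
    "SELECT id FROM issue_comments"
    " WHERE author_association = 'CONTRIBUTOR' AND user = 29051639"
    " ORDER BY updated_at DESC"
).fetchall()
print(rows)  # -> [(966198058,)]
```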