issue_comments
18 rows where issue = 758606082 sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at ▲ | author_association | body | reactions | performed_via_github_app | issue |
---|---|---|---|---|---|---|---|---|---|---|---|
1404008708 | https://github.com/pydata/xarray/pull/4659#issuecomment-1404008708 | https://api.github.com/repos/pydata/xarray/issues/4659 | IC_kwDOAMm_X85Tr3kE | dcherian 2448579 | 2023-01-25T17:52:58Z | 2023-01-25T17:52:58Z | MEMBER |
We still don't have a lazy / out-of-core index unfortunately. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
1398534837 | https://github.com/pydata/xarray/pull/4659#issuecomment-1398534837 | https://api.github.com/repos/pydata/xarray/issues/4659 | IC_kwDOAMm_X85TW_K1 | jsignell 4806877 | 2023-01-20T15:11:13Z | 2023-01-20T15:11:13Z | CONTRIBUTOR | My understanding is that indexes have come a long way since this PR was last touched. Maybe now is the right time to rewrite this in a way that is more performant? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
1383512313 | https://github.com/pydata/xarray/pull/4659#issuecomment-1383512313 | https://api.github.com/repos/pydata/xarray/issues/4659 | IC_kwDOAMm_X85Sdrj5 | sxwebster 57381773 | 2023-01-16T05:26:03Z | 2023-01-16T05:26:57Z | NONE | I'm quite supportive of this effort, as it would make raster calculation operations a whole lot more straightforward, not to mention doing things like joins of the dataframe, which don't necessarily need to exist with the xarray object if selected columns are pushed back to rioxarray as bands. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
824167488 | https://github.com/pydata/xarray/pull/4659#issuecomment-824167488 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDgyNDE2NzQ4OA== | shoyer 1217238 | 2021-04-21T15:47:56Z | 2021-04-21T15:47:56Z | MEMBER | My main concern is really just whether anybody will find this function useful in its current state, with all of its serious performance limitations. I expect conversion from dask data frames to xarray will be much more useful when we support out-of-core indexing, or can unstack multiple columns into multidimensional arrays. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
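The "unstack multiple columns into multidimensional arrays" idea mentioned above can be sketched with plain pandas and xarray (a minimal illustration, not the PR's implementation): two key columns become the dims of a 2-D DataArray via a MultiIndex.

```python
import pandas as pd

# Minimal sketch: the key columns "x" and "y" become the dimensions
# of a 2-D xarray.DataArray (names here are illustrative).
df = pd.DataFrame(
    {"x": [0, 0, 1, 1], "y": [0, 1, 0, 1], "value": [1.0, 2.0, 3.0, 4.0]}
)
# Set the key columns as a MultiIndex, then convert the resulting
# Series to an xarray object (requires xarray to be installed).
da = df.set_index(["x", "y"])["value"].to_xarray()
print(da.dims)   # ("x", "y")
print(da.shape)  # (2, 2)
```

This only works eagerly; the point of the PR is doing the equivalent lazily from a dask DataFrame.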
818269258 | https://github.com/pydata/xarray/pull/4659#issuecomment-818269258 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDgxODI2OTI1OA== | keewis 14808389 | 2021-04-12T21:56:59Z | 2021-04-12T21:56:59Z | MEMBER | this should be ready for review |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
818182598 | https://github.com/pydata/xarray/pull/4659#issuecomment-818182598 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDgxODE4MjU5OA== | keewis 14808389 | 2021-04-12T20:30:23Z | 2021-04-12T21:56:33Z | MEMBER | @AyrtonB, I took the liberty of pushing the changes I had in mind to your branch, using an adapted version of your docstring. The only thing that should be missing is to figure out if it's possible to reduce the number of computes to |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740012453 | https://github.com/pydata/xarray/pull/4659#issuecomment-740012453 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAxMjQ1Mw== | pep8speaks 24736507 | 2020-12-07T16:05:03Z | 2021-04-12T21:29:30Z | NONE | Hello @AyrtonB! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found: There are currently no PEP 8 issues detected in this Pull Request. Cheers! :beers: Comment last updated at 2021-04-12 21:29:29 UTC |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
811388476 | https://github.com/pydata/xarray/pull/4659#issuecomment-811388476 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDgxMTM4ODQ3Ng== | keewis 14808389 | 2021-03-31T19:40:51Z | 2021-03-31T19:40:51Z | MEMBER | @pydata/xarray, any opinion on the API design? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
798989229 | https://github.com/pydata/xarray/pull/4659#issuecomment-798989229 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc5ODk4OTIyOQ== | keewis 14808389 | 2021-03-14T22:10:00Z | 2021-03-14T22:10:00Z | MEMBER | I don't think there is a lot left to decide: we want to keep the conversion logic in

The only thing I think is left to figure out is how to best compute the chunk sizes with as few computations of

cc @dcherian |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
797555413 | https://github.com/pydata/xarray/pull/4659#issuecomment-797555413 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc5NzU1NTQxMw== | AyrtonB 29051639 | 2021-03-12T15:17:16Z | 2021-03-12T15:17:16Z | CONTRIBUTOR | From what I can gather, more serious back-end considerations are needed before this can be progressed. Personally, I've been monkey-patching this code in, which has solved my particular use case; hopefully it's helpful for yours.

```python
import xarray as xr
import pandas as pd
import numpy as np
import dask.dataframe as dd
from dask.distributed import Client
import numcodecs

from types import ModuleType
from datetime import timedelta
from dask.dataframe.core import DataFrame as ddf
from numbers import Number
from typing import Any, Union, Sequence, Tuple, Mapping, Hashable, Dict, Optional, Set

from xarray.core import dtypes, groupby, rolling, resample, weighted, utils
# from xarray.core.accessor_dt import CombinedDatetimelikeAccessor
from xarray.core.variable import Variable, IndexVariable
from xarray.core.merge import PANDAS_TYPES
from xarray.core.variable import NON_NUMPY_SUPPORTED_ARRAY_TYPES, IS_NEP18_ACTIVE, _maybe_wrap_data, _possibly_convert_objects
from xarray.core.dataarray import _check_data_shape, _infer_coords_and_dims, _extract_indexes_from_coords
from xarray.core.common import ImplementsDatasetReduce, DataWithCoords


def as_compatible_data(data, fastpath=False):
    """Prepare and wrap data to put in a Variable.

    - If data does not have the necessary attributes, convert it to ndarray.
    - If data has dtype=datetime64, ensure that it has ns precision. If it's
      a pandas.Timestamp, convert it to datetime64.
    - If data is already a pandas or xarray object (other than an Index),
      just use the values.

    Finally, wrap it up with an adapter if necessary.
    """
    if fastpath and getattr(data, "ndim", 0) > 0:
        # can't use fastpath (yet) for scalars
        return _maybe_wrap_data(data)
    ...  # (remainder of function truncated in the original comment)


xr.core.variable.as_compatible_data = as_compatible_data


class DataArray(xr.core.dataarray.DataArray):
    ...  # (body truncated in the original comment)


xr.core.dataarray.DataArray = DataArray
xr.DataArray = DataArray


def _maybe_chunk(
    name,
    var,
    chunks=None,
    token=None,
    lock=None,
    name_prefix="xarray-",
    overwrite_encoded_chunks=False,
):
    from dask.base import tokenize
    ...  # (remainder of function truncated in the original comment)


class Dataset(xr.Dataset):
    """A multi-dimensional, in memory, array database."""
    ...  # (body truncated in the original comment)


xr.core.dataarray.Dataset = Dataset
xr.Dataset = Dataset
``` |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
797547241 | https://github.com/pydata/xarray/pull/4659#issuecomment-797547241 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc5NzU0NzI0MQ== | martindurant 6042212 | 2021-03-12T15:04:34Z | 2021-03-12T15:04:34Z | CONTRIBUTOR | Ping, can I please ask what the current status is here? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740041249 | https://github.com/pydata/xarray/pull/4659#issuecomment-740041249 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDA0MTI0OQ== | keewis 14808389 | 2020-12-07T16:50:03Z | 2020-12-07T16:51:34Z | MEMBER | there's a few things to fix in
```python
except ImportError:
    dask_dataframe_type = ()
``` |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740032261 | https://github.com/pydata/xarray/pull/4659#issuecomment-740032261 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAzMjI2MQ== | AyrtonB 29051639 | 2020-12-07T16:36:36Z | 2020-12-07T16:36:36Z | CONTRIBUTOR | I've added |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740020080 | https://github.com/pydata/xarray/pull/4659#issuecomment-740020080 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAyMDA4MA== | AyrtonB 29051639 | 2020-12-07T16:17:25Z | 2020-12-07T16:17:25Z | CONTRIBUTOR | That makes sense, thanks @keewis |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740006353 | https://github.com/pydata/xarray/pull/4659#issuecomment-740006353 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAwNjM1Mw== | keewis 14808389 | 2020-12-07T15:55:12Z | 2020-12-07T15:55:12Z | MEMBER | sorry, it is indeed called |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
740002632 | https://github.com/pydata/xarray/pull/4659#issuecomment-740002632 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDc0MDAwMjYzMg== | AyrtonB 29051639 | 2020-12-07T15:49:00Z | 2020-12-07T15:49:00Z | CONTRIBUTOR | Thanks, yes I need to load the library for type-hinting and type checks. When you say |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
739994871 | https://github.com/pydata/xarray/pull/4659#issuecomment-739994871 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDczOTk5NDg3MQ== | keewis 14808389 | 2020-12-07T15:36:57Z | 2020-12-07T15:42:22Z | MEMBER | you can just decorate tests that require

Edit: actually, you seem to import |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 | |
739988806 | https://github.com/pydata/xarray/pull/4659#issuecomment-739988806 | https://api.github.com/repos/pydata/xarray/issues/4659 | MDEyOklzc3VlQ29tbWVudDczOTk4ODgwNg== | AyrtonB 29051639 | 2020-12-07T15:27:10Z | 2020-12-07T15:27:10Z | CONTRIBUTOR | During testing I'm currently encountering the issue: How should testing of dask DataArrays be approached? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
xr.DataArray.from_dask_dataframe feature 758606082 |
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);