issue_comments
77 rows where user = 291576, sorted by updated_at descending
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
738189796 | https://github.com/pydata/xarray/issues/2004#issuecomment-738189796 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDczODE4OTc5Ng== | WeatherGod 291576 | 2020-12-03T18:15:35Z | 2020-12-03T18:15:35Z | CONTRIBUTOR | I think so, at least in terms of my original problem. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slicing DataArray can take longer than not slicing 307318224 | |
642253287 | https://github.com/pydata/xarray/issues/4142#issuecomment-642253287 | https://api.github.com/repos/pydata/xarray/issues/4142 | MDEyOklzc3VlQ29tbWVudDY0MjI1MzI4Nw== | WeatherGod 291576 | 2020-06-10T20:55:32Z | 2020-06-10T20:55:32Z | CONTRIBUTOR | So, one important difference I see off the bat is that zarr already had a DataStore implementation, while rasterio does not. I take it that implementing one would be the preferred approach? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Should we make "rasterio" an engine option? 636493109 | |
451626366 | https://github.com/pydata/xarray/pull/2648#issuecomment-451626366 | https://api.github.com/repos/pydata/xarray/issues/2648 | MDEyOklzc3VlQ29tbWVudDQ1MTYyNjM2Ng== | WeatherGod 291576 | 2019-01-05T04:18:50Z | 2019-01-05T04:18:50Z | CONTRIBUTOR | I had completely forgotten about that little quirk of CPython. I try to ignore implementation details like that. Heck, I still don't fully trust dictionaries to be ordered! I removed the WIP. We can deal with the concat dim default object separately, including turning it into a ReprObject (not exactly sure what the advantage of it is over just using the string, but, meh). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Change an `==` to an `is`. Fix tests so that this won't happen again. 396008054 | |
451583970 | https://github.com/pydata/xarray/pull/2648#issuecomment-451583970 | https://api.github.com/repos/pydata/xarray/issues/2648 | MDEyOklzc3VlQ29tbWVudDQ1MTU4Mzk3MA== | WeatherGod 291576 | 2019-01-04T22:12:44Z | 2019-01-04T22:12:44Z | CONTRIBUTOR | Is the following statement True or False: "The user should be allowed to explicitly declare that they want the concatenation dimension to be inferred by passing a keyword argument". If this is True, then you need to test equivalence. If it is False, then there is nothing more I need to do for the PR, as changing this to use a ReprObject is orthogonal to these changes. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Change an `==` to an `is`. Fix tests so that this won't happen again. 396008054 | |
451581103 | https://github.com/pydata/xarray/pull/2648#issuecomment-451581103 | https://api.github.com/repos/pydata/xarray/issues/2648 | MDEyOklzc3VlQ29tbWVudDQ1MTU4MTEwMw== | WeatherGod 291576 | 2019-01-04T22:00:10Z | 2019-01-04T22:00:10Z | CONTRIBUTOR | ok, so we use the ReprObject for the default, and then test if |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Change an `==` to an `is`. Fix tests so that this won't happen again. 396008054 | |
451504997 | https://github.com/pydata/xarray/issues/2647#issuecomment-451504997 | https://api.github.com/repos/pydata/xarray/issues/2647 | MDEyOklzc3VlQ29tbWVudDQ1MTUwNDk5Nw== | WeatherGod 291576 | 2019-01-04T17:06:50Z | 2019-01-04T17:06:50Z | CONTRIBUTOR | scratch that... the test was an |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
getting a "truth value of an array" error when supplying my own `concat_dim`. 395994055 | |
451504462 | https://github.com/pydata/xarray/issues/2647#issuecomment-451504462 | https://api.github.com/repos/pydata/xarray/issues/2647 | MDEyOklzc3VlQ29tbWVudDQ1MTUwNDQ2Mg== | WeatherGod 291576 | 2019-01-04T17:05:00Z | 2019-01-04T17:05:00Z | CONTRIBUTOR | actually, we could simplify the conditional to be just |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
getting a "truth value of an array" error when supplying my own `concat_dim`. 395994055 | |
451504141 | https://github.com/pydata/xarray/issues/2647#issuecomment-451504141 | https://api.github.com/repos/pydata/xarray/issues/2647 | MDEyOklzc3VlQ29tbWVudDQ1MTUwNDE0MQ== | WeatherGod 291576 | 2019-01-04T17:03:54Z | 2019-01-04T17:03:54Z | CONTRIBUTOR | ah! that's why it snuck through! I have been racking my brain on this for the past hour! shall I go ahead and make a PR? |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
getting a "truth value of an array" error when supplying my own `concat_dim`. 395994055 | |
451501740 | https://github.com/pydata/xarray/issues/2647#issuecomment-451501740 | https://api.github.com/repos/pydata/xarray/issues/2647 | MDEyOklzc3VlQ29tbWVudDQ1MTUwMTc0MA== | WeatherGod 291576 | 2019-01-04T16:55:40Z | 2019-01-04T16:55:40Z | CONTRIBUTOR | To be more explicit, the issue is that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
getting a "truth value of an array" error when supplying my own `concat_dim`. 395994055 | |
425224969 | https://github.com/pydata/xarray/issues/2227#issuecomment-425224969 | https://api.github.com/repos/pydata/xarray/issues/2227 | MDEyOklzc3VlQ29tbWVudDQyNTIyNDk2OQ== | WeatherGod 291576 | 2018-09-27T20:05:05Z | 2018-09-27T20:05:05Z | CONTRIBUTOR | It would be ten files opened via xr.open_mfdataset() concatenated across a time dimension, each one looking like:

```
netcdf convect_gust_20180301_0000 {
dimensions:
    latitude = 3502 ;
    longitude = 7002 ;
variables:
    double latitude(latitude) ;
        latitude:_FillValue = NaN ;
        latitude:_Storage = "contiguous" ;
        latitude:_Endianness = "little" ;
    double longitude(longitude) ;
        longitude:_FillValue = NaN ;
        longitude:_Storage = "contiguous" ;
        longitude:_Endianness = "little" ;
    float gust(latitude, longitude) ;
        gust:_FillValue = NaNf ;
        gust:units = "m/s" ;
        gust:description = "gust winds" ;
        gust:_Storage = "chunked" ;
        gust:_ChunkSizes = 701, 1401 ;
        gust:_DeflateLevel = 8 ;
        gust:_Shuffle = "true" ;
        gust:_Endianness = "little" ;

// global attributes:
    :start_date = "03/01/2018 00:00" ;
    :end_date = "03/01/2018 01:00" ;
    :interval = "half-open" ;
    :init_date = "02/28/2018 22:00" ;
    :history = "Created 2018-09-12 15:53:44.468144" ;
    :description = "Convective Downscaling, format V2.0" ;
    :_NCProperties = "version=1|netcdflibversion=4.6.1|hdf5libversion=1.10.1" ;
    :_SuperblockVersion = 0 ;
    :_IsNetcdf4 = 1 ;
    :_Format = "netCDF-4" ;
}
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slow performance of isel 331668890 | |
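For context, a sketch of how such a stack of files would be opened (the file names are hypothetical; `combine="nested"` is the spelling in current xarray, while versions contemporary with this comment took `concat_dim` alone):

```python
import xarray as xr

# two of the ten hypothetical hourly files described above
paths = ["convect_gust_20180301_0000.nc", "convect_gust_20180301_0100.nc"]
ds = xr.open_mfdataset(paths, concat_dim="time", combine="nested")
```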
424795330 | https://github.com/pydata/xarray/issues/2227#issuecomment-424795330 | https://api.github.com/repos/pydata/xarray/issues/2227 | MDEyOklzc3VlQ29tbWVudDQyNDc5NTMzMA== | WeatherGod 291576 | 2018-09-26T17:06:44Z | 2018-09-26T17:06:44Z | CONTRIBUTOR | No, it does not make a difference. The example above peaks at around 5GB of memory (a bit much, but manageable). And it peaks similarly if we chunk it like you suggested. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slow performance of isel 331668890 | |
424485235 | https://github.com/pydata/xarray/issues/2227#issuecomment-424485235 | https://api.github.com/repos/pydata/xarray/issues/2227 | MDEyOklzc3VlQ29tbWVudDQyNDQ4NTIzNQ== | WeatherGod 291576 | 2018-09-25T20:14:02Z | 2018-09-25T20:14:02Z | CONTRIBUTOR | Yeah, it looks like if |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slow performance of isel 331668890 | |
424479421 | https://github.com/pydata/xarray/issues/2227#issuecomment-424479421 | https://api.github.com/repos/pydata/xarray/issues/2227 | MDEyOklzc3VlQ29tbWVudDQyNDQ3OTQyMQ== | WeatherGod 291576 | 2018-09-25T19:54:59Z | 2018-09-25T19:54:59Z | CONTRIBUTOR | Just for posterity, though, here is my simplified (working!) example:

```
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.randn(10, 3000, 7000),
                  dims=('time', 'latitude', 'longitude'))
window = da.rolling(time=2).construct('win')
indexes = window.argmax(dim='win')
result = window.isel(win=indexes)
```
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slow performance of isel 331668890 | |
424477465 | https://github.com/pydata/xarray/issues/2227#issuecomment-424477465 | https://api.github.com/repos/pydata/xarray/issues/2227 | MDEyOklzc3VlQ29tbWVudDQyNDQ3NzQ2NQ== | WeatherGod 291576 | 2018-09-25T19:48:20Z | 2018-09-25T19:48:20Z | CONTRIBUTOR | Huh, strange... I just tried a simplified version of what I was doing (particularly, no dask arrays), and everything worked fine. I'll have to investigate further. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slow performance of isel 331668890 | |
424470752 | https://github.com/pydata/xarray/issues/2227#issuecomment-424470752 | https://api.github.com/repos/pydata/xarray/issues/2227 | MDEyOklzc3VlQ29tbWVudDQyNDQ3MDc1Mg== | WeatherGod 291576 | 2018-09-25T19:27:28Z | 2018-09-25T19:27:28Z | CONTRIBUTOR | I am looking into a similar performance issue with isel, but it seems that the issue is that it is creating arrays that are much bigger than needed. For my multidimensional case (time/x/y/window), what should end up only taking a few hundred MB is spiking up to 10's of GB of used RAM. Don't know if this might be a possible source of performance issues. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slow performance of isel 331668890 | |
407547050 | https://github.com/pydata/xarray/issues/2217#issuecomment-407547050 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDQwNzU0NzA1MA== | WeatherGod 291576 | 2018-07-24T20:48:53Z | 2018-07-24T20:48:53Z | CONTRIBUTOR | I have created a PR for my work-in-progress: pandas-dev/pandas#22043 |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
400043753 | https://github.com/pydata/xarray/issues/2217#issuecomment-400043753 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDQwMDA0Mzc1Mw== | WeatherGod 291576 | 2018-06-25T18:07:49Z | 2018-06-25T18:07:49Z | CONTRIBUTOR | Do we want to dive straight to that? Or would it make more sense to first submit some PRs piping support for a tolerance kwarg through more of the API? Or perhaps we should propose that "tolerance" become an optional attribute that methods like

In addition, we are likely going to have to implement a decent chunk of code ourselves for compatibility's sake, I think. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
399612490 | https://github.com/pydata/xarray/issues/2217#issuecomment-399612490 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTYxMjQ5MA== | WeatherGod 291576 | 2018-06-22T23:56:41Z | 2018-06-22T23:56:41Z | CONTRIBUTOR | I am not concerned about the non-commutativeness of the indexer itself. There is no way around that. At some point, you have to choose values, whether it is done by an indexer or done by some particular set operation. As for the different sizes, that happens when the tolerance is greater than half the smallest delta. I figure a final implementation would enforce such a constraint on the tolerance. On Fri, Jun 22, 2018 at 5:56 PM, Stephan Hoyer notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
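A sketch of the constraint described above (the helper name is hypothetical): for matching to stay unambiguous, the tolerance must be less than half the smallest spacing between index values, otherwise one value could match two neighbors.

```python
import numpy as np

def max_usable_tolerance(values):
    # half the smallest gap between sorted, de-duplicated index values
    return 0.5 * np.diff(np.unique(np.asarray(values, dtype=float))).min()

# the tolerances in the ImpreciseIndex example below satisfy this bound
assert max_usable_tolerance([0.1, 0.2, 0.3, 0.4]) > 0.01
```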
399584169 | https://github.com/pydata/xarray/issues/2217#issuecomment-399584169 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTU4NDE2OQ== | WeatherGod 291576 | 2018-06-22T21:15:06Z | 2018-06-22T21:15:06Z | CONTRIBUTOR | Actually, I disagree. Pandas's set operation methods are mostly index-based. For union and intersection, they have an optimization that dives down into some C code when the Indexes are monotonic, but everywhere else, it all works off of results from

```python
from __future__ import print_function
import warnings
from pandas import Index
import numpy as np
from pandas.indexes.base import is_object_dtype, algos, is_dtype_equal
from pandas.indexes.base import _ensure_index, _concat, _values_from_object, _unsortable_types
from pandas.indexes.numeric import Float64Index


def _choose_tolerance(this, that, tolerance):
    if tolerance is None:
        tolerance = max(this.tolerance, getattr(that, 'tolerance', 0.0))
    return tolerance


class ImpreciseIndex(Float64Index):
    def astype(self, dtype, copy=True):
        return ImpreciseIndex(self.values.astype(dtype=dtype, copy=copy),
                              name=self.name, dtype=dtype)


if __name__ == '__main__':
    a = ImpreciseIndex([0.1, 0.2, 0.3, 0.4])
    a.tolerance = 0.01
    b = ImpreciseIndex([0.301, 0.401, 0.501, 0.601])
    b.tolerance = 0.025
    print(a, b)
    print("a | b :", a.union(b))
    print("a & b :", a.intersection(b))
    print("a.get_indexer(b):", a.get_indexer(b))
    print("b.get_indexer(a):", b.get_indexer(a))
```

Run this and get the following results:

This is mostly lifted from the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
399522595 | https://github.com/pydata/xarray/issues/2217#issuecomment-399522595 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTUyMjU5NQ== | WeatherGod 291576 | 2018-06-22T17:42:29Z | 2018-06-22T17:42:29Z | CONTRIBUTOR | Ok, I see how you implemented it for pandas's reindex. You essentially inserted an inexact filter within For xarray, though, I think we can work around backwards compatibility by having Dataset hold specialized subclasses of Index for floating-point data types that would have the needed changes to the Index class. We can have this specialized class have some default tolerance (say 100*finfo(dtype).resolution?), and it would have its methods use the stored tolerance by default, so it should be completely transparent to the end-user (hopefully). This way, |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
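Spelling out the suggested default numerically (a sketch, not a settled API):

```python
import numpy as np

# 100 * finfo(dtype).resolution, as proposed above; for float64 this is 1e-13
default_tolerance = 100 * np.finfo(np.float64).resolution
print(default_tolerance)  # 1e-13
```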
399286310 | https://github.com/pydata/xarray/issues/2217#issuecomment-399286310 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTI4NjMxMA== | WeatherGod 291576 | 2018-06-22T00:45:19Z | 2018-06-22T00:45:19Z | CONTRIBUTOR | @shoyer, I am thinking your original intuition was right about needing to improve the Index classes, perhaps to work with an optional epsilon argument to the constructor. How receptive do you think pandas would be to that? And even if they would accept such a feature, we probably would need to implement it a bit ourselves in situations where older pandas versions are used. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
399285369 | https://github.com/pydata/xarray/issues/2217#issuecomment-399285369 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTI4NTM2OQ== | WeatherGod 291576 | 2018-06-22T00:38:34Z | 2018-06-22T00:38:34Z | CONTRIBUTOR | Well, I need this to work for join='outer', so, it is gonna happen one way or another... One concept I was toying with today was a distinction between aligning coords (which is what it does now) and aligning bounding boxes. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
399254317 | https://github.com/pydata/xarray/issues/2217#issuecomment-399254317 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTI1NDMxNw== | WeatherGod 291576 | 2018-06-21T21:48:28Z | 2018-06-21T21:48:28Z | CONTRIBUTOR | To be clear, my use-case would not be solved by |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
399253493 | https://github.com/pydata/xarray/issues/2217#issuecomment-399253493 | https://api.github.com/repos/pydata/xarray/issues/2217 | MDEyOklzc3VlQ29tbWVudDM5OTI1MzQ5Mw== | WeatherGod 291576 | 2018-06-21T21:44:58Z | 2018-06-21T21:44:58Z | CONTRIBUTOR | I was just pointed to this issue yesterday, and I have an immediate need for this feature in xarray for a work project. I'll take responsibility to implement this feature tomorrow. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
tolerance for alignment 329575874 | |
380241636 | https://github.com/pydata/xarray/pull/2048#issuecomment-380241636 | https://api.github.com/repos/pydata/xarray/issues/2048 | MDEyOklzc3VlQ29tbWVudDM4MDI0MTYzNg== | WeatherGod 291576 | 2018-04-10T20:48:25Z | 2018-04-10T20:48:25Z | CONTRIBUTOR | What's new entry added. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
concat_dim for auto_combine for a single object is now respected 312998259 | |
380203653 | https://github.com/pydata/xarray/pull/2048#issuecomment-380203653 | https://api.github.com/repos/pydata/xarray/issues/2048 | MDEyOklzc3VlQ29tbWVudDM4MDIwMzY1Mw== | WeatherGod 291576 | 2018-04-10T18:34:32Z | 2018-04-10T18:34:32Z | CONTRIBUTOR | Travis failures seem to be unrelated? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
concat_dim for auto_combine for a single object is now respected 312998259 | |
380137124 | https://github.com/pydata/xarray/issues/1988#issuecomment-380137124 | https://api.github.com/repos/pydata/xarray/issues/1988 | MDEyOklzc3VlQ29tbWVudDM4MDEzNzEyNA== | WeatherGod 291576 | 2018-04-10T15:12:05Z | 2018-04-10T15:12:05Z | CONTRIBUTOR | Yup... looks like that did the trick (for auto_combine and open_mfdataset). I even have a simple test to demonstrate it. PR coming shortly. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset() on a single file drops the concat_dim 305327479 | |
379939574 | https://github.com/pydata/xarray/issues/1988#issuecomment-379939574 | https://api.github.com/repos/pydata/xarray/issues/1988 | MDEyOklzc3VlQ29tbWVudDM3OTkzOTU3NA== | WeatherGod 291576 | 2018-04-10T00:55:48Z | 2018-04-10T00:55:48Z | CONTRIBUTOR | I'll give it a go tomorrow. My work has gotten to this point now, and I have some unit tests that happen to exercise this edge case. On a somewhat related note, would a

Any interest? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset() on a single file drops the concat_dim 305327479 | |
379901414 | https://github.com/pydata/xarray/issues/1988#issuecomment-379901414 | https://api.github.com/repos/pydata/xarray/issues/1988 | MDEyOklzc3VlQ29tbWVudDM3OTkwMTQxNA== | WeatherGod 291576 | 2018-04-09T21:35:11Z | 2018-04-09T21:35:11Z | CONTRIBUTOR | Could the fix be as simple as |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
open_mfdataset() on a single file drops the concat_dim 305327479 | |
375056363 | https://github.com/pydata/xarray/issues/2004#issuecomment-375056363 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTA1NjM2Mw== | WeatherGod 291576 | 2018-03-21T18:50:58Z | 2018-03-21T18:50:58Z | CONTRIBUTOR | Ah, nevermind, I see that our examples only had one greater-than-one stride |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slicing DataArray can take longer than not slicing 307318224 | |
375056077 | https://github.com/pydata/xarray/issues/2004#issuecomment-375056077 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTA1NjA3Nw== | WeatherGod 291576 | 2018-03-21T18:50:01Z | 2018-03-21T18:50:01Z | CONTRIBUTOR | Dunno. I can't seem to get that engine working on my system. Reading through that thread, I wonder if the optimization they added only applies if there is only one stride greater than one? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slicing DataArray can take longer than not slicing 307318224 | |
375036951 | https://github.com/pydata/xarray/issues/2004#issuecomment-375036951 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTAzNjk1MQ== | WeatherGod 291576 | 2018-03-21T17:51:54Z | 2018-03-21T17:51:54Z | CONTRIBUTOR | This might be relevant: https://github.com/Unidata/netcdf4-python/issues/680 Still reading through the thread. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slicing DataArray can take longer than not slicing 307318224 | |
375034973 | https://github.com/pydata/xarray/issues/2004#issuecomment-375034973 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTAzNDk3Mw== | WeatherGod 291576 | 2018-03-21T17:46:09Z | 2018-03-21T17:46:09Z | CONTRIBUTOR | my bet is probably netCDF4-python. Don't want to write up the C code though to confirm it. Sigh... this isn't going to be a fun one to track down. Shall I open a bug report over there? |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slicing DataArray can take longer than not slicing 307318224 | |
375014480 | https://github.com/pydata/xarray/issues/2004#issuecomment-375014480 | https://api.github.com/repos/pydata/xarray/issues/2004 | MDEyOklzc3VlQ29tbWVudDM3NTAxNDQ4MA== | WeatherGod 291576 | 2018-03-21T16:50:59Z | 2018-03-21T16:56:13Z | CONTRIBUTOR | Yeah, good example. Eliminates a lot of possible variables such as problems with netcdf4 compression and such. Probably should see if it happens in v0.10.0 to see if the changes to the indexing system caused this. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Slicing DataArray can take longer than not slicing 307318224 | |
373840044 | https://github.com/pydata/xarray/issues/1997#issuecomment-373840044 | https://api.github.com/repos/pydata/xarray/issues/1997 | MDEyOklzc3VlQ29tbWVudDM3Mzg0MDA0NA== | WeatherGod 291576 | 2018-03-16T20:45:39Z | 2018-03-16T20:45:39Z | CONTRIBUTOR | MaskedArrays had a similar problem, IIRC, because they were blindly copying the ndarray docstrings. Not going to be easy to do, though. "we don't support |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
can't do in-place clip() with DataArrays. 306067267 | |
370986433 | https://github.com/pydata/xarray/pull/1899#issuecomment-370986433 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM3MDk4NjQzMw== | WeatherGod 291576 | 2018-03-07T01:08:36Z | 2018-03-07T01:08:36Z | CONTRIBUTOR | :tada: |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
367077311 | https://github.com/pydata/xarray/pull/1899#issuecomment-367077311 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NzA3NzMxMQ== | WeatherGod 291576 | 2018-02-20T18:43:56Z | 2018-02-20T18:43:56Z | CONTRIBUTOR | I did some more investigation into the memory usage problem I was having. I had assumed that the vectorized indexed result of a lazily indexed data array would be an in-memory array. So, when I started to use the result, it did a read of all the data at once, resulting in a near-complete load of the data into memory. I have adjusted my code to chunk out the indexing in order to keep the memory usage under control, at a reasonable performance penalty. I haven't looked into trying to identify the ideal chunking scheme to follow for an arbitrary dataarray and indexing. Perhaps we can make that a task for another day. At this point, I am satisfied with the features (negative step-sizes aside, of course). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
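A rough sketch of the chunked-indexing workaround described above, reusing `window` and `indexes` from the rolling/argmax example earlier in this table (the helper and chunk size are hypothetical):

```python
import xarray as xr

def chunked_take(window, indexes, dim="latitude", size=500):
    # pull the vectorized selection in blocks along `dim`, so only one
    # block's worth of data is read into memory at a time
    pieces = []
    for start in range(0, window.sizes[dim], size):
        block = {dim: slice(start, start + size)}
        pieces.append(window.isel(block).isel(win=indexes.isel(block)))
    return xr.concat(pieces, dim=dim)
```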
366379465 | https://github.com/pydata/xarray/pull/1899#issuecomment-366379465 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM3OTQ2NQ== | WeatherGod 291576 | 2018-02-16T22:40:06Z | 2018-02-16T22:40:06Z | CONTRIBUTOR | Ah-hah! Ok, so, the problem isn't some weird difference between the two examples I gave. The issue is that calling |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366376400 | https://github.com/pydata/xarray/pull/1899#issuecomment-366376400 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM3NjQwMA== | WeatherGod 291576 | 2018-02-16T22:25:59Z | 2018-02-16T22:25:59Z | CONTRIBUTOR | huh... now I am not so sure about that... must be something else triggering the load. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366374917 | https://github.com/pydata/xarray/pull/1899#issuecomment-366374917 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM3NDkxNw== | WeatherGod 291576 | 2018-02-16T22:19:08Z | 2018-02-16T22:19:08Z | CONTRIBUTOR | also, at this point, I don't know if this is limited to the netcdf4 backend, as this type of indexing was only done on a variable I have in a netcdf file. I don't have 4-D variables in other file types. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366374041 | https://github.com/pydata/xarray/pull/1899#issuecomment-366374041 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM3NDA0MQ== | WeatherGod 291576 | 2018-02-16T22:14:49Z | 2018-02-16T22:14:49Z | CONTRIBUTOR |
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366373479 | https://github.com/pydata/xarray/pull/1899#issuecomment-366373479 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM3MzQ3OQ== | WeatherGod 291576 | 2018-02-16T22:12:18Z | 2018-02-16T22:12:18Z | CONTRIBUTOR | Ah, not a change in behavior, but a possible bug exposed by a tiny change on my part. So, I have a 4D data array, So, somehow, the indexing system is effectively treating these two things as different. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366363419 | https://github.com/pydata/xarray/pull/1899#issuecomment-366363419 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM2MzQxOQ== | WeatherGod 291576 | 2018-02-16T21:28:09Z | 2018-02-16T21:28:09Z | CONTRIBUTOR | correction... the problem isn't with pynio... it is in the netcdf4 backend |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366360382 | https://github.com/pydata/xarray/pull/1899#issuecomment-366360382 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjM2MDM4Mg== | WeatherGod 291576 | 2018-02-16T21:15:17Z | 2018-02-16T21:15:17Z | CONTRIBUTOR | Something changed. Now the indexing for pynio is forcing a full loading of the data. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
366059694 | https://github.com/pydata/xarray/pull/1899#issuecomment-366059694 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NjA1OTY5NA== | WeatherGod 291576 | 2018-02-15T20:59:20Z | 2018-02-15T20:59:20Z | CONTRIBUTOR | I can confirm that with the latest changes, the pynio tests now pass locally for me. Now, as to whether or not the tests in there are actually exercising anything useful is a different question. |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365734783 | https://github.com/pydata/xarray/issues/1910#issuecomment-365734783 | https://api.github.com/repos/pydata/xarray/issues/1910 | MDEyOklzc3VlQ29tbWVudDM2NTczNDc4Mw== | WeatherGod 291576 | 2018-02-14T20:27:38Z | 2018-02-14T20:27:38Z | CONTRIBUTOR | Looking through the travis logs, I do see that pynio is getting installed. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pynio tests are being skipped on TravisCI 297227247 | |
365734285 | https://github.com/pydata/xarray/issues/1910#issuecomment-365734285 | https://api.github.com/repos/pydata/xarray/issues/1910 | MDEyOklzc3VlQ29tbWVudDM2NTczNDI4NQ== | WeatherGod 291576 | 2018-02-14T20:25:52Z | 2018-02-14T20:25:52Z | CONTRIBUTOR | Zarr tests and pydap tests are also being skipped |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pynio tests are being skipped on TravisCI 297227247 | |
365729433 | https://github.com/pydata/xarray/pull/1899#issuecomment-365729433 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTcyOTQzMw== | WeatherGod 291576 | 2018-02-14T20:07:55Z | 2018-02-14T20:07:55Z | CONTRIBUTOR | I am working on re-activating those tests. I think PyNio is now available for python3, too. On Wed, Feb 14, 2018 at 2:59 PM, Joe Hamman notifications@github.com wrote:
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365722413 | https://github.com/pydata/xarray/pull/1899#issuecomment-365722413 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTcyMjQxMw== | WeatherGod 291576 | 2018-02-14T19:43:07Z | 2018-02-14T19:43:07Z | CONTRIBUTOR | It looks like the pynio backend isn't regularly tested, as several of its tests currently fail when I run them locally. Some of them are failing because they assert NotImplementedErrors for things that are now implemented. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365708385 | https://github.com/pydata/xarray/pull/1899#issuecomment-365708385 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTcwODM4NQ== | WeatherGod 291576 | 2018-02-14T18:55:43Z | 2018-02-14T18:55:43Z | CONTRIBUTOR | Just did some more debugging, putting in some debug statements within
``` And here is the test script (data not included):
And here is the relevant output:
So, the |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365692868 | https://github.com/pydata/xarray/pull/1899#issuecomment-365692868 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTY5Mjg2OA== | WeatherGod 291576 | 2018-02-14T18:02:17Z | 2018-02-14T18:06:24Z | CONTRIBUTOR | Ah, interesting... so, this dataset was created by doing an isel() on the original: ```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365689883 | https://github.com/pydata/xarray/pull/1899#issuecomment-365689883 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTY4OTg4Mw== | WeatherGod 291576 | 2018-02-14T17:52:24Z | 2018-02-14T17:52:24Z | CONTRIBUTOR | I can also confirm that the shape comes out correctly using master, so this is definitely isolated to this PR. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365689003 | https://github.com/pydata/xarray/pull/1899#issuecomment-365689003 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTY4OTAwMw== | WeatherGod 291576 | 2018-02-14T17:49:20Z | 2018-02-14T17:49:20Z | CONTRIBUTOR | Hmm, came across a bug with the pynio backend. Working on making a reproducible example, but just for your own inspection, here is some logging output:
If I revert back to v0.10.0, then the shape is (1059, 1799), just as expected. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
365657502 | https://github.com/pydata/xarray/pull/1899#issuecomment-365657502 | https://api.github.com/repos/pydata/xarray/issues/1899 | MDEyOklzc3VlQ29tbWVudDM2NTY1NzUwMg== | WeatherGod 291576 | 2018-02-14T16:13:16Z | 2018-02-14T16:13:16Z | CONTRIBUTOR | Oh, wow... this worked like a charm for the netcdf4 backend! I have a ~13GB (uncompressed) 4-D netcdf4 variable that was giving me trouble for slicing a 2D surface out of. Here is a snippet where I am grabbing data at random indices in the last dimension. First for a specific latitude, then for the entire domain. ```
I will try out similar things with the pynio and rasterio backends, and get back to you. Thanks for this work! |
{ "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Vectorized lazy indexing 295838143 | |
345310488 | https://github.com/pydata/xarray/issues/1720#issuecomment-345310488 | https://api.github.com/repos/pydata/xarray/issues/1720 | MDEyOklzc3VlQ29tbWVudDM0NTMxMDQ4OA== | WeatherGod 291576 | 2017-11-17T17:33:13Z | 2017-11-17T17:33:13Z | CONTRIBUTOR | Awesome! Thanks! |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Possible regression with PyNIO data not being lazily loaded 274308380 | |
345124033 | https://github.com/pydata/xarray/issues/1720#issuecomment-345124033 | https://api.github.com/repos/pydata/xarray/issues/1720 | MDEyOklzc3VlQ29tbWVudDM0NTEyNDAzMw== | WeatherGod 291576 | 2017-11-17T02:08:50Z | 2017-11-17T02:08:50Z | CONTRIBUTOR | Is there a convenient sentinel I can check for loaded-ness? The only reason I noticed this was that I was debugging another problem with my processing of HRRR files (~600 MB each) and the memory usage shot up (did you know that

On Thu, Nov 16, 2017 at 8:57 PM, Stephan Hoyer notifications@github.com wrote:

|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Possible regression with PyNIO data not being lazily loaded 274308380 | |
342576941 | https://github.com/pydata/xarray/issues/475#issuecomment-342576941 | https://api.github.com/repos/pydata/xarray/issues/475 | MDEyOklzc3VlQ29tbWVudDM0MjU3Njk0MQ== | WeatherGod 291576 | 2017-11-07T18:29:12Z | 2017-11-07T18:29:12Z | CONTRIBUTOR | Yeah, we need to move something forward, because the main benefit of xarray is the ability to manage datasets from multiple sources in a consistent way. And data from different sources will almost always be in different projections. The problem I need to solve right now is that I am ingesting model data that is in an LCC projection and radar data that is on a simple regular lat/lon grid. Both dataset objects have latitude and longitude coordinate arrays; I just need to get both datasets onto the same lat/lon grid. I guess I could continue using my old scipy-based solution (using map_coordinates() or RectBivariateSpline), but at the very least, it would make sense to have some documentation demonstrating how one might go about this very common problem, even if it is showing how to use the scipy-based tools with xarray. If that is of interest, I can see what I can write up after I am done with my immediate task. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
API design for pointwise indexing 95114700 | |
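A minimal sketch of the scipy-based approach mentioned above, assuming both grids are regular with monotonically increasing 1-D lat/lon (the function and its names are hypothetical):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def regrid_bilinear(values, src_lat, src_lon, tgt_lat, tgt_lon):
    # map target coordinates to fractional indices on the source grid
    i = np.interp(tgt_lat, src_lat, np.arange(src_lat.size))
    j = np.interp(tgt_lon, src_lon, np.arange(src_lon.size))
    jj, ii = np.meshgrid(j, i)                          # 2-D target index grids
    return map_coordinates(values, [ii, jj], order=1)   # bilinear sampling
```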
342553465 | https://github.com/pydata/xarray/issues/475#issuecomment-342553465 | https://api.github.com/repos/pydata/xarray/issues/475 | MDEyOklzc3VlQ29tbWVudDM0MjU1MzQ2NQ== | WeatherGod 291576 | 2017-11-07T17:11:49Z | 2017-11-07T17:11:49Z | CONTRIBUTOR | So, what has become the consensus for performing regridding/resampling? I see a lot of suggestions, but I have no sense of what is mature enough to use in production-level code. I also haven't seen anything in the documentation about this topic, even if it just refers people to another project. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
API design for pointwise indexing 95114700 | |
147797539 | https://github.com/pydata/xarray/pull/459#issuecomment-147797539 | https://api.github.com/repos/pydata/xarray/issues/459 | MDEyOklzc3VlQ29tbWVudDE0Nzc5NzUzOQ== | WeatherGod 291576 | 2015-10-13T18:03:56Z | 2015-10-13T18:03:56Z | CONTRIBUTOR | That's all the time I have at the moment. I do have some more notes from my old, incomplete implementation, though. I'll try to finish the review tomorrow. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
add pynio backend 94100328 | |
146976549 | https://github.com/pydata/xarray/issues/615#issuecomment-146976549 | https://api.github.com/repos/pydata/xarray/issues/615 | MDEyOklzc3VlQ29tbWVudDE0Njk3NjU0OQ== | WeatherGod 291576 | 2015-10-09T20:15:49Z | 2015-10-09T20:15:49Z | CONTRIBUTOR | hmm, good point. I wish I knew why I ended up using |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
operations with pd.to_timedelta() now fails 110726841 | |
60429213 | https://github.com/pydata/xarray/issues/268#issuecomment-60429213 | https://api.github.com/repos/pydata/xarray/issues/268 | MDEyOklzc3VlQ29tbWVudDYwNDI5MjEz | WeatherGod 291576 | 2014-10-24T18:27:30Z | 2014-10-24T18:27:30Z | CONTRIBUTOR | Note, I mean that I at first thought that collapsing variables into scalars was a useful feature, not that it would happen only for datasets and not data arrays. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
groupby reduction sometimes collapses variables into scalars 46768521 | |
60425242 | https://github.com/pydata/xarray/issues/267#issuecomment-60425242 | https://api.github.com/repos/pydata/xarray/issues/267 | MDEyOklzc3VlQ29tbWVudDYwNDI1MjQy | WeatherGod 291576 | 2014-10-24T17:58:37Z | 2014-10-24T17:58:37Z | CONTRIBUTOR | So, is the string approach I used above to grab a single day's data a bug or a feature? It is a nice short-hand, but I don't want to rely on it if it isn't intended to be a feature. Similarly, if I supply a Year-Month string, I get data for that month. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
can't use datetime or pandas datetime to index time dimension 46756880 | |
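The behavior in question is partial-string datetime indexing, inherited from pandas; a minimal sketch of both forms:

```python
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2014-09-01", periods=60, freq="D")
da = xr.DataArray(np.arange(60), dims="time", coords={"time": times})

september = da.sel(time="2014-09")    # Year-Month string: the whole month
one_day = da.sel(time="2014-09-15")   # Year-Month-Day string: a single day
```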
60413505 | https://github.com/pydata/xarray/issues/267#issuecomment-60413505 | https://api.github.com/repos/pydata/xarray/issues/267 | MDEyOklzc3VlQ29tbWVudDYwNDEzNTA1 | WeatherGod 291576 | 2014-10-24T16:37:26Z | 2014-10-24T16:37:26Z | CONTRIBUTOR | Gah, I am sorry, please disregard my last comment. I can't add/subtract... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
can't use datetime or pandas datetime to index time dimension 46756880 | |
60413356 | https://github.com/pydata/xarray/issues/267#issuecomment-60413356 | https://api.github.com/repos/pydata/xarray/issues/267 | MDEyOklzc3VlQ29tbWVudDYwNDEzMzU2 | WeatherGod 291576 | 2014-10-24T16:36:18Z | 2014-10-24T16:36:18Z | CONTRIBUTOR | A bit of a further wrinkle is that date selection seems to be limited to local time only because of this limitation. Consider the following: ```
I don't know how I would (easily) slice this data array such as to grab only data for a UTC day. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
can't use datetime or pandas datetime to index time dimension 46756880 | |
60404650 | https://github.com/pydata/xarray/issues/185#issuecomment-60404650 | https://api.github.com/repos/pydata/xarray/issues/185 | MDEyOklzc3VlQ29tbWVudDYwNDA0NjUw | WeatherGod 291576 | 2014-10-24T15:37:00Z | 2014-10-24T15:37:00Z | CONTRIBUTOR | May I propose a name? xray.glasses |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Plot methods 38109425 | |
60399616 | https://github.com/pydata/xarray/issues/264#issuecomment-60399616 | https://api.github.com/repos/pydata/xarray/issues/264 | MDEyOklzc3VlQ29tbWVudDYwMzk5NjE2 | WeatherGod 291576 | 2014-10-24T15:04:23Z | 2014-10-24T15:04:23Z | CONTRIBUTOR | I should note that if an inner join is performed, then no NaNs are inserted and the arrays remain float32. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
align silently upcasts data arrays when NaNs are inserted 46745063 | |
58570858 | https://github.com/pydata/xarray/issues/214#issuecomment-58570858 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTcwODU4 | WeatherGod 291576 | 2014-10-09T20:19:12Z | 2014-10-09T20:19:12Z | CONTRIBUTOR | Ok, I think I got it (for reals this time...)

```
def bcast(spat_only, coord_names):
    coords = []
    for i, n in enumerate(coord_names):
        if spat_only[n].ndim != len(spat_only.dims):
            # Needs new axes
            slices = [np.newaxis] * len(spat_only.dims)
            slices[i] = slice(None)
        else:
            slices = [slice(None)] * len(spat_only.dims)
        coords.append(spat_only[n].values[slices])
    return np.broadcast_arrays(*coords)


def grid_to_points2(grid, points, coord_names):
    if not coord_names:
        raise ValueError("No coordinate names provided")
    spat_dims = {d for n in coord_names for d in grid[n].dims}
    not_spatial = set(grid.dims) - spat_dims
    spatial_selection = {n: 0 for n in not_spatial}
    spat_only = grid.isel(**spatial_selection)
```

Needs a lot more tests and comments and such, but I think this works. Best part is that it seems to do a very decent job of keeping memory usage low, and only operates upon the coordinates that I specify. Everything else is left alone. So, I have used this on 4-D data, picking out grid points at specified lat/lon positions, and get back a 3D result (time, level, station). And I have used this on just 2D data, getting back just a 1D result (dimension='station'). |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58568933 | https://github.com/pydata/xarray/issues/214#issuecomment-58568933 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTY4OTMz | WeatherGod 291576 | 2014-10-09T20:05:01Z | 2014-10-09T20:05:01Z | CONTRIBUTOR | Consider the following Dataset:
The latitude and longitude variables are both dependent upon xgrid_0 and ygrid_0. Meanwhile...
the latitude and longitude variables are independent of each other (they are 1-D). The variable in the first one cannot be accessed directly by lat/lon values, while the MaxGust variable in the second one can. This poses some difficulties. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
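A small sketch of the two layouts being contrasted (toy sizes, hypothetical variable names): 2-D coordinate variables that depend on the grid dimensions, versus independent 1-D coordinates that double as indexes.

```python
import numpy as np
import xarray as xr

# dependent (projected) grid: latitude/longitude are 2-D auxiliary coords
lat2d, lon2d = np.meshgrid(np.linspace(30, 40, 5), np.linspace(-100, -90, 4),
                           indexing="ij")
proj = xr.Dataset(
    {"gust": (("ygrid_0", "xgrid_0"), np.zeros((5, 4)))},
    coords={"latitude": (("ygrid_0", "xgrid_0"), lat2d),
            "longitude": (("ygrid_0", "xgrid_0"), lon2d)})

# independent grid: 1-D coords are dimension indexes, directly selectable
latlon = xr.Dataset(
    {"MaxGust": (("latitude", "longitude"), np.zeros((5, 4)))},
    coords={"latitude": np.linspace(30, 40, 5),
            "longitude": np.linspace(-100, -90, 4)})

latlon.MaxGust.sel(latitude=32.5, longitude=-90.0)   # works
# proj.gust.sel(latitude=32.5) would fail: latitude is not an index there
```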
58565934 | https://github.com/pydata/xarray/issues/214#issuecomment-58565934 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTY1OTM0 | WeatherGod 291576 | 2014-10-09T19:43:08Z | 2014-10-09T19:43:08Z | CONTRIBUTOR | Hmmm, limitation that I just encountered. When there are dependent coordinates, the variables representing those coordinates are not the index arrays (and thus, are not "dimensions" either), so my solution is completely broken for dependent coordinates. If I were to go back to my DataArray-only solution, then I still need to correct the code to use the dimension names of the coordinate variables, and still need to fix the coordinates != dimensions issue. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58562506 | https://github.com/pydata/xarray/issues/214#issuecomment-58562506 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTYyNTA2 | WeatherGod 291576 | 2014-10-09T19:16:52Z | 2014-10-09T19:16:52Z | CONTRIBUTOR | to/from_dataframe just ate up all my memory. I think I am going to stick with my broadcasting approach... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58558069 | https://github.com/pydata/xarray/issues/214#issuecomment-58558069 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTU4MDY5 | WeatherGod 291576 | 2014-10-09T18:47:22Z | 2014-10-09T18:47:22Z | CONTRIBUTOR | oooh, didn't realize that |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58553935 | https://github.com/pydata/xarray/issues/214#issuecomment-58553935 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTUzOTM1 | WeatherGod 291576 | 2014-10-09T18:21:16Z | 2014-10-09T18:21:16Z | CONTRIBUTOR | And, actually, the example I gave above has a bug in the dependent dimension case. This one should be much better (not fully tested yet, though):

```
def grid_to_points2(grid, points, coord_names):
    if not coord_names:
        raise ValueError("No coordinate names provided")
    not_spatial = set(grid.dims) - set(coord_names)
    spatial_selection = {n: 0 for n in not_spatial}
    spat_only = grid.isel(**spatial_selection)
    coords = []
    for i, n in enumerate(spat_only.dims):
        if spat_only[n].ndim != len(spat_only.dims):
            # Needs new axes
            slices = [np.newaxis] * len(spat_only.dims)
            slices[i] = slice(None)
        else:
            slices = [slice(None)] * len(spat_only.dims)
        coords.append(spat_only[n].values[slices])
    coords = np.broadcast_arrays(*coords)
```
|
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58551759 | https://github.com/pydata/xarray/issues/214#issuecomment-58551759 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTUxNzU5 | WeatherGod 291576 | 2014-10-09T18:06:56Z | 2014-10-09T18:06:56Z | CONTRIBUTOR | And, I think I just realized how I could generalize it even more. Right now, |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58550741 | https://github.com/pydata/xarray/issues/214#issuecomment-58550741 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTUwNzQx | WeatherGod 291576 | 2014-10-09T18:00:33Z | 2014-10-09T18:00:33Z | CONTRIBUTOR | Oh, and it does take advantage of a bunch of python2.7 features such as dictionary comprehensions and generator statements, so... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
58550403 | https://github.com/pydata/xarray/issues/214#issuecomment-58550403 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU4NTUwNDAz | WeatherGod 291576 | 2014-10-09T17:58:25Z | 2014-10-09T17:58:25Z | CONTRIBUTOR | Started using the above snippet for more datasets, some with interdependent coordinates and some without (so the coordinates would be 1-d). I think I have generalized it significantly...

```
def grid_to_points(grid, points, coord_names):
    not_spatial = set(grid.dims) - set(coord_names)
    spatial_selection = {n: 0 for n in not_spatial}
    spat_only = grid.isel(**spatial_selection)
    coords = []
    for i, n in enumerate(spat_only.dims):
        if spat_only[n].ndim != len(spat_only.dims):
            # Needs new axes
            slices = [np.newaxis] * len(spat_only.dims)
            slices[i] = slice(None)
        else:
            slices = [slice(None)] * len(spat_only.dims)
        coords.append(spat_only[n].values[slices])
    coords = [c.flatten() for c in np.broadcast_arrays(*coords)]
```

I can still imagine some situations where this won't work, such as a requested set of dimensions that are a mix of dependent and independent variables. Currently, if the dimensions are independent, then the number of dimensions of each one is assumed to be 1 and np.newaxis is used for the others. Meanwhile, if the dimensions are dependent, then the number of dimensions for each one is assumed to be the same as the number of dependent variables and is merely flattened (the broadcast is essentially no-op). I should also note that this is technically not restricted to spatial coordinates, even though the code says so. Just anything that can be represented in Euclidean space. |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
57857522 | https://github.com/pydata/xarray/issues/214#issuecomment-57857522 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU3ODU3NTIy | WeatherGod 291576 | 2014-10-03T20:48:35Z | 2014-10-03T20:48:35Z | CONTRIBUTOR | Just managed to implement this using your suggestion for my data:
Not entirely certain why I needed to reverse y and x in that last part, but, oh well... |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 | |
57847940 | https://github.com/pydata/xarray/issues/214#issuecomment-57847940 | https://api.github.com/repos/pydata/xarray/issues/214 | MDEyOklzc3VlQ29tbWVudDU3ODQ3OTQw | WeatherGod 291576 | 2014-10-03T19:56:16Z | 2014-10-03T19:56:16Z | CONTRIBUTOR | Unless I am missing something about xray, that selection operation could only work if |
{ "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } |
Pointwise indexing -- something like sel_points 40395257 |
```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
```