html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/2751#issuecomment-495106994,https://api.github.com/repos/pydata/xarray/issues/2751,495106994,MDEyOklzc3VlQ29tbWVudDQ5NTEwNjk5NA==,971382,2019-05-23T07:48:39Z,2019-05-23T07:48:39Z,NONE,"@shoyer I've tested the solution provided and it works like a charm with my tests. However, many tests in test_backends.py break in cases where we lose precision; I'll give you more detail.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-479831812,https://api.github.com/repos/pydata/xarray/issues/2751,479831812,MDEyOklzc3VlQ29tbWVudDQ3OTgzMTgxMg==,971382,2019-04-04T09:55:14Z,2019-04-04T09:55:14Z,NONE,"@shoyer Sorry for the delayed response.
`dtypes.result_type(1, np.float32(1))` returns `dtype('float64')`.
That is what causes this behaviour for Python's `int` and `float`.
Keeping the behaviour consistent would then require testing whether `scale_factor * var_dtype + add_offset` fits in `var_dtype` in the case of Python's `int` and `float`.
Correct me if I'm wrong, but this is a bit hard to do without knowing the min/max values in order to avoid overflow.
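For concreteness, here is a minimal sketch of the kind of bounds check I have in mind, using only the dtype limits rather than the actual data min/max (the helper name is hypothetical, not existing xarray code):

```python
import numpy as np

def transformed_range_fits(var_dtype, scale_factor, add_offset, target_dtype):
    # Hypothetical helper: do the extreme representable values of var_dtype,
    # once multiplied by scale_factor and shifted by add_offset, still fit
    # into target_dtype?
    var_info = (np.iinfo(var_dtype) if np.issubdtype(var_dtype, np.integer)
                else np.finfo(var_dtype))
    tgt_info = (np.iinfo(target_dtype) if np.issubdtype(target_dtype, np.integer)
                else np.finfo(target_dtype))
    lo = scale_factor * var_info.min + add_offset
    hi = scale_factor * var_info.max + add_offset
    return tgt_info.min <= min(lo, hi) and max(lo, hi) <= tgt_info.max
```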
Evaluating those bounds could hurt performance if they are only needed for encoding and decoding.
Do you have any idea how this could be achieved, or is it simpler to keep it as it is?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-473296942,https://api.github.com/repos/pydata/xarray/issues/2751,473296942,MDEyOklzc3VlQ29tbWVudDQ3MzI5Njk0Mg==,971382,2019-03-15T14:00:34Z,2019-03-15T14:00:34Z,NONE,@shoyer Do you mean that by default we should treat Python's `int` and `float` as `np.int64` and `np.float64`?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-470881715,https://api.github.com/repos/pydata/xarray/issues/2751,470881715,MDEyOklzc3VlQ29tbWVudDQ3MDg4MTcxNQ==,971382,2019-03-08T10:29:32Z,2019-03-08T10:29:32Z,NONE,"@shoyer Tests are failing, but it doesn't seem to come from this PR; I saw the same error on other PRs as well, and my tests were working fine until I did a git pull.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-468177645,https://api.github.com/repos/pydata/xarray/issues/2751,468177645,MDEyOklzc3VlQ29tbWVudDQ2ODE3NzY0NQ==,971382,2019-02-28T08:09:58Z,2019-02-28T08:09:58Z,NONE,@shoyer Did you have a look at this?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-466290710,https://api.github.com/repos/pydata/xarray/issues/2751,466290710,MDEyOklzc3VlQ29tbWVudDQ2NjI5MDcxMA==,971382,2019-02-22T06:38:32Z,2019-02-22T06:38:59Z,NONE,"@shoyer I changed the implementation and took your comments into consideration.
Returning the largest type now takes place only when decoding.
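Roughly, the idea is the following (an illustrative sketch of the logic, not the actual diff; the helper name is made up):

```python
import numpy as np

def _decoded_dtype(var_dtype, scale_factor=None, add_offset=None):
    # Made-up helper sketching the idea: when decoding, pick the largest of
    # the variable dtype and the dtypes of scale_factor / add_offset, but only
    # when those attributes are numpy scalars; otherwise keep the old behaviour.
    candidates = [np.dtype(var_dtype)]
    for attr in (scale_factor, add_offset):
        if isinstance(attr, np.generic):
            candidates.append(attr.dtype)
    return np.result_type(*candidates)
```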
I added a test covering all dtype combinations of scale_factor, add_offset and the variable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-464749511,https://api.github.com/repos/pydata/xarray/issues/2751,464749511,MDEyOklzc3VlQ29tbWVudDQ2NDc0OTUxMQ==,971382,2019-02-18T14:25:00Z,2019-02-18T14:25:25Z,NONE,"@shoyer scale_factor and add_offset are now taken into account when encoding and decoding data.
If neither of them is present, or if they are not subtypes of np.generic, the old behaviour takes place.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-463097712,https://api.github.com/repos/pydata/xarray/issues/2751,463097712,MDEyOklzc3VlQ29tbWVudDQ2MzA5NzcxMg==,971382,2019-02-13T08:01:39Z,2019-02-13T08:01:39Z,NONE,"@shoyer Yes, sure, I'll update the pull request with the mentioned modifications.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-462681656,https://api.github.com/repos/pydata/xarray/issues/2751,462681656,MDEyOklzc3VlQ29tbWVudDQ2MjY4MTY1Ng==,971382,2019-02-12T09:21:59Z,2019-02-12T09:22:12Z,NONE,"@shoyer The logic is now propagated down to `_choose_float_dtype` inside `CFScaleOffsetCoder`; please let me know what you think.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-461761945,https://api.github.com/repos/pydata/xarray/issues/2751,461761945,MDEyOklzc3VlQ29tbWVudDQ2MTc2MTk0NQ==,971382,2019-02-08T10:41:34Z,2019-02-08T10:41:34Z,NONE,@shoyer Did you have a look at this?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/issues/2304#issuecomment-451984471,https://api.github.com/repos/pydata/xarray/issues/2304,451984471,MDEyOklzc3VlQ29tbWVudDQ1MTk4NDQ3MQ==,971382,2019-01-07T16:04:11Z,2019-01-07T16:04:11Z,NONE,"Hi,
Thank you for your effort in making xarray a great library.
As mentioned in the issue, the discussion moved to a PR in order to make xr.open_dataset configurable.
This post is to ask for your recommendations regarding our PR.
In this case we would add a parameter to the open_dataset function called ""force_promote"", a boolean that defaults to False and is therefore optional.
We would then propagate that parameter down to the function maybe_promote in dtypes.py,
where we would do the following:
if dtype.itemsize <= 2 and not force_promote:
    dtype = np.float32
else:
    dtype = np.float64
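From the caller's side, the proposed parameter would be used like this (force_promote does not exist in xarray today; the file name is a placeholder):

```python
import xarray as xr

# Hypothetical usage of the proposed force_promote flag; 'data.nc' is a placeholder path.
ds = xr.open_dataset('data.nc', force_promote=True)
```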
The downside is that we somewhat pollute the code with a parameter that is only used in a specific case.
The second approach would check an environment variable called ""XARRAY_FORCE_PROMOTE""; if it exists and is set to true, it would force promotion to float64.
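Here is a minimal sketch of that variant, assuming the same itemsize rule as above (the function name is illustrative, not existing xarray API):

```python
import os

import numpy as np

def maybe_promote_float_dtype(dtype):
    # Illustrative sketch only: promote small dtypes to float32 unless the
    # XARRAY_FORCE_PROMOTE environment variable asks for float64 everywhere.
    force_promote = os.environ.get('XARRAY_FORCE_PROMOTE', '').lower() in ('1', 'true')
    if dtype.itemsize <= 2 and not force_promote:
        return np.dtype(np.float32)
    return np.dtype(np.float64)
```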
Please tell us which approach best suits your vision of xarray.
Regards.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,343659822
https://github.com/pydata/xarray/issues/2304#issuecomment-412492776,https://api.github.com/repos/pydata/xarray/issues/2304,412492776,MDEyOklzc3VlQ29tbWVudDQxMjQ5Mjc3Ng==,971382,2018-08-13T11:51:15Z,2018-08-13T11:51:15Z,NONE,Any updates on this?,"{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,343659822
https://github.com/pydata/xarray/issues/2304#issuecomment-410678021,https://api.github.com/repos/pydata/xarray/issues/2304,410678021,MDEyOklzc3VlQ29tbWVudDQxMDY3ODAyMQ==,971382,2018-08-06T11:31:00Z,2018-08-06T11:31:00Z,NONE,"As mentioned in the original issue, the modification is straightforward.
Any idea whether this could be integrated into xarray anytime soon?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,343659822
https://github.com/pydata/xarray/issues/2304#issuecomment-407092265,https://api.github.com/repos/pydata/xarray/issues/2304,407092265,MDEyOklzc3VlQ29tbWVudDQwNzA5MjI2NQ==,971382,2018-07-23T15:10:13Z,2018-07-23T15:10:13Z,NONE,"Thank you for your quick answer.
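Before describing our case, here is a tiny self-contained illustration of the float32 precision ceiling we mean (illustrative only, not taken from our data):

```python
import numpy as np

# float32 carries about 7 significant decimal digits (24-bit mantissa), so
# beyond 2**24 adding 1 is simply lost; long accumulations drift the same way.
print(np.float32(2**24) + np.float32(1) == np.float32(2**24))  # True
print(np.float64(2**24) + np.float64(1) == np.float64(2**24))  # False
```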
In our case we may compute standard deviations or sums of squares over long lists of values, and the accumulation of small rounding errors due to the float32 type could create considerable differences.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,343659822