html_url,issue_url,id,node_id,user,created_at,updated_at,author_association,body,reactions,performed_via_github_app,issue
https://github.com/pydata/xarray/pull/2751#issuecomment-495106994,https://api.github.com/repos/pydata/xarray/issues/2751,495106994,MDEyOklzc3VlQ29tbWVudDQ5NTEwNjk5NA==,971382,2019-05-23T07:48:39Z,2019-05-23T07:48:39Z,NONE,"@shoyer I've tested the solution provided; it works like a charm with my tests. However, many tests are broken in test_backends.py in cases where we lose precision. I'll give you more detail.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-479831812,https://api.github.com/repos/pydata/xarray/issues/2751,479831812,MDEyOklzc3VlQ29tbWVudDQ3OTgzMTgxMg==,971382,2019-04-04T09:55:14Z,2019-04-04T09:55:14Z,NONE,"@shoyer sorry for the delayed response. `dtypes.result_type(1, np.float32(1))` returns `dtype('float64')`; that's what drives this behaviour for Python's `int` and `float`. Keeping the consistency would then require testing whether `scale_factor*var_dtype + add_offset` fits in `var_dtype` in the case of Python's `int` and `float`. Correct me if I'm wrong, but this is a bit hard to do without knowing the max/min values to avoid overflow, and evaluating those could hurt performance if they're only used for encoding and decoding. Do you have any idea how this could be achieved? Or is it simpler to keep it as it is?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-473296942,https://api.github.com/repos/pydata/xarray/issues/2751,473296942,MDEyOklzc3VlQ29tbWVudDQ3MzI5Njk0Mg==,971382,2019-03-15T14:00:34Z,2019-03-15T14:00:34Z,NONE,"@shoyer do you mean that by default, when we deal with Python's `int` and `float`, we cast them to `np.int64` and `np.float64`?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-470881715,https://api.github.com/repos/pydata/xarray/issues/2751,470881715,MDEyOklzc3VlQ29tbWVudDQ3MDg4MTcxNQ==,971382,2019-03-08T10:29:32Z,2019-03-08T10:29:32Z,NONE,"@shoyer tests are failing, but it doesn't seem to be coming from this PR; I saw the same error on other PRs as well, and my tests were working fine until I did a git pull.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-468177645,https://api.github.com/repos/pydata/xarray/issues/2751,468177645,MDEyOklzc3VlQ29tbWVudDQ2ODE3NzY0NQ==,971382,2019-02-28T08:09:58Z,2019-02-28T08:09:58Z,NONE,"@shoyer did you have a look at this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-466290710,https://api.github.com/repos/pydata/xarray/issues/2751,466290710,MDEyOklzc3VlQ29tbWVudDQ2NjI5MDcxMA==,971382,2019-02-22T06:38:32Z,2019-02-22T06:38:59Z,NONE,"@shoyer I changed the implementation and took your comments into consideration; now returning the largest type takes place only when decoding. I added a test covering all types of scale_factor, add_offset, and variable.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-464749511,https://api.github.com/repos/pydata/xarray/issues/2751,464749511,MDEyOklzc3VlQ29tbWVudDQ2NDc0OTUxMQ==,971382,2019-02-18T14:25:00Z,2019-02-18T14:25:25Z,NONE,"@shoyer now scale_factor and add_offset are taken into account when encoding and decoding data. If neither of them is present, or if they're not a subtype of np.generic, the old behaviour takes place.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-463097712,https://api.github.com/repos/pydata/xarray/issues/2751,463097712,MDEyOklzc3VlQ29tbWVudDQ2MzA5NzcxMg==,971382,2019-02-13T08:01:39Z,2019-02-13T08:01:39Z,NONE,"@shoyer yes, sure, I'll update the pull request with the mentioned modifications.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-462681656,https://api.github.com/repos/pydata/xarray/issues/2751,462681656,MDEyOklzc3VlQ29tbWVudDQ2MjY4MTY1Ng==,971382,2019-02-12T09:21:59Z,2019-02-12T09:22:12Z,NONE,"@shoyer the logic is now propagated down to _choose_float_dtype inside CFScaleOffsetCoder; please let me know what you think.","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
https://github.com/pydata/xarray/pull/2751#issuecomment-461761945,https://api.github.com/repos/pydata/xarray/issues/2751,461761945,MDEyOklzc3VlQ29tbWVudDQ2MTc2MTk0NQ==,971382,2019-02-08T10:41:34Z,2019-02-08T10:41:34Z,NONE,"@shoyer did you have a look at this?","{""total_count"": 0, ""+1"": 0, ""-1"": 0, ""laugh"": 0, ""hooray"": 0, ""confused"": 0, ""heart"": 0, ""rocket"": 0, ""eyes"": 0}",,407746874
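
The thread above debates which dtype CF decoding (`decoded = packed * scale_factor + add_offset`) should produce. A minimal sketch of the underlying NumPy promotion rules is below; it uses plain NumPy only, not xarray's `dtypes.result_type` discussed in the PR, and all variable names and values are illustrative.

```python
import numpy as np

# Packed integer data, as a CF-encoded variable would store it on disk.
packed = np.array([100, 200, 300], dtype=np.int16)

# 1-element arrays (rather than NumPy scalars) are used so that plain
# array-with-array promotion applies in every NumPy version, sidestepping
# the legacy value-based scalar casting this thread wrestles with.
scale32 = np.array([0.1], dtype=np.float32)
offset32 = np.array([5.0], dtype=np.float32)
decoded32 = packed * scale32 + offset32
print(decoded32.dtype)  # float32: int16 with float32 promotes to float32

scale64 = np.array([0.1], dtype=np.float64)
offset64 = np.array([5.0], dtype=np.float64)
decoded64 = packed * scale64 + offset64
print(decoded64.dtype)  # float64: int16 with float64 promotes to float64

# The same promotion can be queried directly from the dtypes:
print(np.result_type(np.int16, np.float32))  # float32
print(np.result_type(np.int16, np.float64))  # float64
```

The commenter's point is that once `scale_factor` arrives as a Python `float` rather than a typed NumPy value, there is no dtype to consult, so choosing anything narrower than float64 would require value-range checks like the ones the comment argues are too costly.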