
issue_comments


12 rows where issue = 1723010051 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
1568557130 https://github.com/pydata/xarray/issues/7871#issuecomment-1568557130 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dfkhK mathause 10194086 2023-05-30T14:40:50Z 2023-05-30T14:40:50Z MEMBER

I am closing this. Feel free to re-open or open a new issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562734279 https://github.com/pydata/xarray/issues/7871#issuecomment-1562734279 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dJW7H gkb999 7091088 2023-05-25T11:23:44Z 2023-05-25T11:23:44Z NONE

> Yes, float64 should cause less imprecision. You can convert using `astype`:
>
> ```python
> import numpy as np
> import xarray as xr
>
> da = xr.DataArray(np.array([1, 2], dtype=np.float32))
> da = da.astype(float)
> ```
>
> As for the other problems I think you are better off asking the people over at rioxarray. However, you should first gather all the steps you did to convert the data as code. This way it is easier to see what you are actually doing.

Thanks for getting back. I did post in rioxarray, and yet the last step I mentioned isn't successful there either. I'll post the code maybe 8 hours from now (I can reach my system then). Thanks for all the helpful suggestions so far. Really helpful.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562707652 https://github.com/pydata/xarray/issues/7871#issuecomment-1562707652 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dJQbE mathause 10194086 2023-05-25T11:02:29Z 2023-05-25T11:02:29Z MEMBER

Yes, float64 should cause less imprecision. You can convert using `astype`:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1, 2], dtype=np.float32))
da = da.astype(float)
```

As for the other problems I think you are better off asking the people over at rioxarray. However, you should first gather all the steps you did to convert the data as code. This way it is easier to see what you are actually doing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562698250 https://github.com/pydata/xarray/issues/7871#issuecomment-1562698250 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dJOIK gkb999 7091088 2023-05-25T10:55:09Z 2023-05-25T10:55:09Z NONE

> xarray handles nan values and ignores them per default - so you don't need to remove them. For example:
>
> ```python
> import numpy as np
> import xarray as xr
>
> da = xr.DataArray([1, 2, 3, np.nan])
> da.mean()
> ```

This is really helpful, as I didn't know this before.

> If you have precision problems - that might be because you have float32 values.

Which format would not cause the issue in that case, float64? If yes, can we manually convert?

> I don't know what goes wrong with your lon values - that is an issue in the reprojection. You could convert them to 0...360 by using
>
> ```python
> lon_dim = "x"
> new_lon = np.mod(da[lon_dim], 360)
> da = da.assign_coords(**{lon_dim: new_lon})
> da.reindex(**{lon_dim: np.sort(da[lon_dim])})
> ```

Yeah, I have done the 180-to-360 degree conversion before, but I feel the issue is more with the rioxarray reprojection. The data (downloaded from the internet) is in meters; since I wanted it in degrees (lat-lon format), I reprojected it from polar stereographic to WGS84. That converted the data's coordinates to degrees, and the latitudes are perfect, but the longitudes are arranged from -180 to +180 instead of 160E to 199W. I also tried wrapping the longitudes to 0-360; they should technically fall in the 160-200 range, but instead they span and stretch across the whole 0-360 range, which isn't right.

So, converting the existing gridded data (in meters) to a lat-lon projection without affecting the resolution and without NaNs is my ultimate objective. I did convert the data to lat-lon and clipped it to my region, but that drastically changed the resolution, by a factor of around 20 maybe. Preserving the resolution is very important for my work. So that's the issue with the longitudes.

Thanks for your time if you went through this.
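As a rough illustration of the reprojection step described above, the sketch below uses rioxarray's `rio.write_crs` / `rio.reproject`; the input file name, the EPSG:3413 source CRS, and the 0.05° resolution are illustrative assumptions, not values from this thread. Passing an explicit `resolution` is one way to keep control over the output grid spacing.

```python
import rioxarray  # noqa: F401  # registers the .rio accessor on xarray objects
import xarray as xr
from rasterio.enums import Resampling

da = xr.open_dataarray("input.nc")  # placeholder file name

# Tell rioxarray which CRS the data is currently in
# (EPSG:3413 is just an example polar-stereographic CRS).
da = da.rio.write_crs("EPSG:3413")

# Reproject to lat-lon (WGS84); an explicit resolution keeps the
# output grid spacing under your control instead of letting it be
# inferred from the source grid.
da_ll = da.rio.reproject(
    "EPSG:4326",
    resolution=0.05,            # degrees; illustrative value
    resampling=Resampling.bilinear,
)
```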

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1562605326 https://github.com/pydata/xarray/issues/7871#issuecomment-1562605326 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dI3cO mathause 10194086 2023-05-25T09:44:31Z 2023-05-25T09:44:31Z MEMBER

xarray handles nan values and ignores them per default - so you don't need to remove them. For example:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1, 2, 3, np.nan])
da.mean()
```

If you have precision problems - that might be because you have `float32` values.

I don't know what goes wrong with your lon values - that is an issue in the reprojection. You could convert them to 0...360 by using

```python
lon_dim = "x"
new_lon = np.mod(da[lon_dim], 360)
da = da.assign_coords({lon_dim: new_lon})
da.reindex({lon_dim: np.sort(da[lon_dim])})
```
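A minimal sketch of the same wrap-and-sort idea using `sortby` (the toy values below are made up; the coordinate name `x` follows the snippet above):

```python
import numpy as np
import xarray as xr

# toy data with longitudes spanning -180..180
da = xr.DataArray(
    [10.0, 20.0, 30.0, 40.0],
    dims="x",
    coords={"x": [-170.0, -90.0, 90.0, 170.0]},
)

lon_dim = "x"
da = da.assign_coords({lon_dim: np.mod(da[lon_dim], 360)})
da = da.sortby(lon_dim)  # reorders the data along the now 0..360 longitudes
```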

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1561999178 https://github.com/pydata/xarray/issues/7871#issuecomment-1561999178 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dGjdK gkb999 7091088 2023-05-24T22:17:02Z 2023-05-24T22:17:02Z NONE

Well, that does make sense. I want to calculate anomalies along the x-y grids, and I'm guessing the nan values are interfering with the results.

Also, I have another question which isn't regarding NaNs; if it is right to ask here, I may proceed (else tag/link the relevant places/forums). Assuming you must know: I reprojected my nc file from meters to degrees. Now, although the projection is right, the values of longitude aren't:

```python
x        (x) float64 -179.2 -177.7 ... 177.7 179.2
array([-179.217367, -177.65215 , -176.086933, -174.521715, -172.956498,
       -171.391281, -169.826063, -168.260846, -166.695629, -165.130412,
       -163.565194, -161.999977, -160.43476 , -158.869542,  163.565218,
        165.130436,  166.695653,  168.26087 ,  169.826088,  171.391305,
        172.956522,  174.521739,  176.086957,  177.652174,  179.217391])
```

This is not how it is supposed to be: it should fall within the 160-200 longitudes (after wrapping to 360).

Is there a way xarray can sort this automatically, or do I need to manually reset the coordinates?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560777789 https://github.com/pydata/xarray/issues/7871#issuecomment-1560777789 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dB5Q9 mathause 10194086 2023-05-24T09:32:46Z 2023-05-24T09:32:46Z MEMBER

Yes, but there are fewer - as mentioned, it only removes columns/rows that contain nothing but NaNs; if there is at least one non-NaN value the row is kept.

What is the reason that you want to get rid of the nan values?
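A toy example of that behaviour (values made up): no row or column of the array below is entirely NaN, so `where(..., drop=True)` removes nothing and the isolated NaNs survive.

```python
import numpy as np
import xarray as xr

da = xr.DataArray([[1.0, np.nan], [np.nan, 2.0]], dims=("x", "y"))

# No row or column is entirely NaN, so drop=True drops no labels
# and both NaN cells remain in the result.
print(da.where(da.notnull(), drop=True))
```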

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560588932 https://github.com/pydata/xarray/issues/7871#issuecomment-1560588932 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dBLKE gkb999 7091088 2023-05-24T07:25:38Z 2023-05-24T07:26:40Z NONE

> Can you try `notnull` instead of `isnull` - I often get the boolean array wrong in `where`:
>
> ```python
> da = ds['z']
> da = da.where(da.notnull(), drop=True)
> ```

Yes, I did.

As we can see, the nan values are not completely gone.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560587282 https://github.com/pydata/xarray/issues/7871#issuecomment-1560587282 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dBKwS mathause 10194086 2023-05-24T07:24:37Z 2023-05-24T07:24:37Z MEMBER

Can you try notnull instead of isnull - I often get the boolean array wrong in where:

```python
da = ds['z']
da = da.where(da.notnull(), drop=True)
```

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560584420 https://github.com/pydata/xarray/issues/7871#issuecomment-1560584420 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dBKDk gkb999 7091088 2023-05-24T07:22:13Z 2023-05-24T07:22:13Z NONE

Thanks a lot for responding, but

```python
da = ds['z']
da = da.where(da.isnull(), drop=True)
```

is for picking nan values? Because the data array then has all 'nan' values.

When I plot, I get:

I need to use data that has no empty cells for further analysis.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560572196 https://github.com/pydata/xarray/issues/7871#issuecomment-1560572196 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dBHEk mathause 10194086 2023-05-24T07:12:28Z 2023-05-24T07:12:28Z MEMBER

What is the reason that you want to get rid of the nan values?

The reason they come back is that they are needed to fill the grid again. The dataframe is 1D but the dataarray is 2D.

What you can try is to use where:

```python
da = ds['z']
da = da.where(da.isnull(), drop=True)
```

but it will only drop the values if the entire row/column is nan.
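A toy illustration of why the NaNs reappear when moving between the 1D dataframe view and the 2D grid (values made up, not from this thread):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    [[1.0, np.nan], [np.nan, 2.0]], dims=("x", "y"), name="z"
)

# In the 1D (stacked) dataframe view the NaN rows can be dropped ...
df = da.to_dataframe().dropna()

# ... but rebuilding the 2D (x, y) grid re-creates them, because
# every (x, y) combination needs some value.
print(df.to_xarray()["z"])
```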

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051
1560323080 https://github.com/pydata/xarray/issues/7871#issuecomment-1560323080 https://api.github.com/repos/pydata/xarray/issues/7871 IC_kwDOAMm_X85dAKQI welcome[bot] 30606887 2023-05-24T01:13:43Z 2023-05-24T01:13:43Z NONE

Thanks for opening your first issue here at xarray! Be sure to follow the issue template! If you have an idea for a solution, we would really welcome a Pull Request with proposed changes. See the Contributing Guide for more. It may take us a while to respond here, but we really value your contribution. Contributors like you help make xarray better. Thank you!

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Nan Values never get deleted 1723010051

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
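The listing above ("12 rows where issue = 1723010051 sorted by updated_at descending") corresponds to a plain query against this table. A minimal sketch using Python's sqlite3, assuming the data sits in a local SQLite file; the file name `github.db` is a placeholder:

```python
import sqlite3

# "github.db" is a placeholder path for the SQLite database behind this page.
conn = sqlite3.connect("github.db")
conn.row_factory = sqlite3.Row

rows = conn.execute(
    """
    SELECT id, user, created_at, updated_at, author_association, body
    FROM issue_comments
    WHERE issue = ?
    ORDER BY updated_at DESC
    """,
    (1723010051,),
).fetchall()

for row in rows:
    print(row["id"], row["author_association"], row["created_at"])
```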