issues: 1310167771
id: 1310167771
node_id: I_kwDOAMm_X85OF5Lb
number: 6814
title: Memory shortage
user: 93350809
state: closed
locked: 0
comments: 1
created_at: 2022-07-19T22:58:02Z
updated_at: 2022-07-20T01:23:14Z
closed_at: 2022-07-20T01:23:14Z
author_association: NONE
state_reason: completed
repo: 13221727
type: issue
reactions: { "url": "https://api.github.com/repos/pydata/xarray/issues/6814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }

body:

What is your issue?

Hi,

As part of my project, I need to extract raster values within a one-kilometer buffer around many points. I used `xarray.open_rasterio` to open the GeoTIFF file, then `ds.rio.clip` to clip the raster to each buffer. My raster has 30 × 30 meter resolution and covers the whole contiguous US (more than 19 GB). I used High-Performance Computing clusters with 192 GB of RAM, but I kept receiving memory errors. Does anyone know an efficient way to clip over a raster? Here is the code that I use:

```python
import geemap
import rioxarray  # registers the .rio accessor on xarray objects
import xarray as xr
from shapely.geometry import mapping

# Load the point locations and project them to a metric CRS
FPA_FOD_NLCD = geemap.csv_to_gdf(in_csv='file.csv', latitude='LATITUDE', longitude='LONGITUDE')
FPA_FOD_NLCD = FPA_FOD_NLCD.to_crs(crs='EPSG:26910')
FPA_FOD_NLCD['Land_Cover_buffer'] = None

# Open the nationwide NLCD raster
# (note: masked= is a rioxarray.open_rasterio argument, not xr.open_rasterio)
NLCD = xr.open_rasterio(filename='NLCD.tif', masked=True)

# Build a 1 km buffer polygon around each point
buffer = FPA_FOD_NLCD.buffer(distance=1000, resolution=6)
buffer_crs = buffer.crs
buffer = buffer.geometry.apply(mapping)

# Clip the raster to each buffer and store the mean value
for i in range(len(FPA_FOD_NLCD)):
    buffer_NLCD = NLCD.rio.clip(buffer[i:i+1], crs=buffer_crs, drop=True, from_disk=True).mean()
    FPA_FOD_NLCD.loc[i, 'Land_Cover_buffer'] = buffer_NLCD
    del buffer_NLCD

FPA_FOD_NLCD = FPA_FOD_NLCD.drop(labels='geometry', axis=1)
FPA_FOD_NLCD.to_csv(path_or_buf=f'NLCD/{year}_FPA_FOD_NLCD_buf.csv', sep=',', index=False)
```
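The memory pressure here comes from clipping the full 19 GB raster once per point. One way around it is a windowed read: extract only the small pixel window covering each 1 km buffer and average the pixels inside the circle. The sketch below (not from the issue; the function name and the plain nested-list raster are illustrative stand-ins for a real raster array) shows the idea in pure Python:

```python
def buffer_mean(raster, row, col, radius_px):
    """Mean of raster values within radius_px pixels of (row, col).

    Reads only the window that covers the circular buffer instead of
    clipping the whole raster, so memory use stays bounded per point.
    raster: 2D sequence of numbers; None marks nodata pixels.
    Returns None if no valid pixels fall inside the buffer.
    """
    nrows, ncols = len(raster), len(raster[0])
    # Bounding window of the circle, clamped to the raster extent
    r0, r1 = max(0, row - radius_px), min(nrows, row + radius_px + 1)
    c0, c1 = max(0, col - radius_px), min(ncols, col + radius_px + 1)
    total, count = 0.0, 0
    for r in range(r0, r1):
        for c in range(c0, c1):
            v = raster[r][c]
            if v is None:
                continue  # skip nodata
            # Keep only pixels whose center lies inside the circle
            if (r - row) ** 2 + (c - col) ** 2 <= radius_px ** 2:
                total += v
                count += 1
    return total / count if count else None
```

With 30 m pixels, a 1 km buffer corresponds to a radius of roughly 33 pixels. The same idea should translate to the rioxarray workflow in the issue: opening the file lazily (e.g. with the `chunks=` argument) and narrowing to each point's bounding box with `rio.clip_box` before calling `rio.clip` avoids ever materializing the full raster, though exact behavior depends on the rioxarray version.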