issues

1 row where type = "issue" and user = 10809480 sorted by updated_at descending

id: 441222339
node_id: MDU6SXNzdWU0NDEyMjIzMzk=
number: 2946
title: std interprets continents as zero not nan
user: andytraumueller (10809480)
state: closed
locked: 0
comments: 5
created_at: 2019-05-07T13:06:32Z
updated_at: 2023-12-02T02:46:37Z
closed_at: 2023-12-02T02:46:36Z
author_association: NONE

hi there,

I couldn't find anything related yet. My issue is that I have to run calculations over a large set of worldwide time-series datasets. I keep hitting a weird bug: the std calculation interprets NaN differently than the mean calculation does.

Here is my typical code:

```python
import xarray as xr
import glob
import numpy as np

# open both daily files as one dataset along the time dimension
data = xr.open_mfdataset(
    [r"C:\Users\atraumue\Desktop\test\dt_global_allsat_phy_l4_20170101_20180115.nc",
     r"C:\Users\atraumue\Desktop\test\dt_global_allsat_phy_l4_20170102_20180115.nc"],
    parallel=True, concat_dim="time")

# drop everything except the variables needed for the calculation
data = data.drop(["lon_bnds", "lat_bnds", "ugosa", "ugos", "sla",
                  "vgos", "vgosa", "err", "ssh", "nv"])
adt = data.drop("velocity")

adt.mean(dim="time", skipna=True).to_netcdf(
    r"C:\Users\atraumue\Desktop\calcsadt_mean_2004_2018_month5.nc")
adt.std(dim="time", skipna=True, ddof=1).astype(np.float64).to_netcdf(
    r"C:\Users\atraumue\Desktop\calcsadt_std_2004_2018_month5.nc")

data.close()
adt.close()
```

Dropbox link to the files: https://www.dropbox.com/sh/yuf114u143mj2l3/AABuQfC5wu4nrWDH4GsGgFyJa?dl=0
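As a minimal self-contained sketch of the behaviour I would expect (plain NumPy on synthetic data instead of the NetCDF files above; shapes and values are hypothetical stand-ins): an all-NaN "continent" pixel should stay NaN in both the NaN-skipping mean and the NaN-skipping std, not become 0 in the std.

```python
import numpy as np

# Hypothetical (time, lat, lon) array: one "ocean" pixel with real data,
# everything else all-NaN like a continent would be.
adt = np.full((5, 2, 2), np.nan)
adt[:, 0, 0] = [1.0, 2.0, 3.0, 4.0, 5.0]  # the ocean pixel

# NaN-skipping reductions over time, matching skipna=True and ddof=1 above.
mean = np.nanmean(adt, axis=0)
std = np.nanstd(adt, axis=0, ddof=1)

print(np.isnan(mean[1, 1]), np.isnan(std[1, 1]))  # land pixel: True True
print(mean[0, 0], round(std[0, 0], 4))            # ocean pixel: 3.0 1.5811
```

Here both reductions leave the land pixel as NaN, which is what I would expect the xarray/dask pipeline above to do as well.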

I don't know why this occurs; for mean calculations there is no problem with the continents. As a dirty workaround I just overlay them.
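The overlay workaround can be sketched like this (a hedged illustration with made-up arrays, not the actual output files): wherever the mean is NaN, force the std to NaN as well.

```python
import numpy as np

# Hypothetical stand-ins for the computed fields: the mean correctly has
# NaN over continents, while the buggy std shows 0 there instead.
mean = np.array([[3.0, np.nan], [np.nan, np.nan]])
std = np.array([[1.58, 0.0], [0.0, 0.0]])

# Re-apply the mean's NaN mask to the std field.
std_fixed = np.where(np.isnan(mean), np.nan, std)
print(std_fixed)
```

In xarray terms the same overlay would be `std.where(mean.notnull())` before writing the std field to NetCDF.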

Output of xr.show_versions()

```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 18:50:55) [MSC v.1915 64 bit (AMD64)]
python-bits: 64
OS: Windows
OS-release: 10
machine: AMD64
processor: Intel64 Family 6 Model 63 Stepping 2, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
LOCALE: None.None
libhdf5: 1.10.4
libnetcdf: 4.6.2
xarray: 0.12.1
pandas: 0.24.1
numpy: 1.15.4
scipy: 1.2.0
netCDF4: 1.4.2
pydap: None
h5netcdf: 0.6.2
h5py: 2.9.0
Nio: None
zarr: None
cftime: 1.0.3.4
nc_time_axis: None
PseudonetCDF: None
rasterio: 1.0.13
cfgrib: None
iris: None
bottleneck: 1.2.1
dask: 1.1.1
distributed: 1.25.3
matplotlib: 3.0.2
cartopy: 0.17.0
seaborn: 0.9.0
setuptools: 40.7.3
pip: 19.0.1
conda: 4.6.14
pytest: 4.2.0
IPython: 7.2.0
sphinx: 1.8.4
```
{
    "url": "https://api.github.com/repos/pydata/xarray/issues/2946/reactions",
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
state_reason: not_planned
repo: xarray (13221727)
type: issue

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
Powered by Datasette · Queries took 19.238ms · About: xarray-datasette