issues

4 rows where type = "issue" and user = 16919188 sorted by updated_at descending

Facets: type = issue (4) · state = closed (4) · repo = xarray (4)
#1064 · Differences in datetime values appear after writing a reindexed variable to a netCDF file

  • id: 185709414 · node_id: MDU6SXNzdWUxODU3MDk0MTQ=
  • user: Scheibs (16919188) · state: closed · locked: 0 · comments: 12 · author_association: NONE
  • created_at: 2016-10-27T15:54:34Z · updated_at: 2023-09-24T15:05:27Z · closed_at: 2023-09-24T15:05:27Z
  • state_reason: completed · repo: xarray (13221727) · type: issue

In my Dataset I've got a time series coordinate that begins like this:

```
<xarray.DataArray 'time' (time: 10)>
array(['2014-02-15T00:00:00.000000000+0100', '2014-02-15T18:10:00.000000000+0100',
       '2014-02-16T18:10:00.000000000+0100', '2014-02-17T18:10:00.000000000+0100',
       '2014-02-18T18:10:00.000000000+0100', '2014-02-19T18:10:00.000000000+0100',
       '2014-02-20T18:10:00.000000000+0100', '2014-02-21T18:10:00.000000000+0100',
       '2014-02-22T00:00:00.000000000+0100', '2014-02-23T00:00:00.000000000+0100'],
      dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 2014-02-14T23:00:00 2014-02-15T17:10:00 ...
```

Everything is fine when I write and re-open the netCDF file.

Then I try to add a reindexed variable to this dataset, like this:

da["MeanRainfallHeigh"] = rain.reindex(time =da.time).fillna(0)

Writing still works, but when I reopen the netCDF file the minutes part of the time values has been altered:

```
<xarray.DataArray 'time' (time: 10)>
array(['2014-02-15T00:00:00.000000000+0100', '2014-02-15T18:00:00.000000000+0100',
       '2014-02-16T18:00:00.000000000+0100', '2014-02-17T18:00:00.000000000+0100',
       '2014-02-18T18:00:00.000000000+0100', '2014-02-19T18:00:00.000000000+0100',
       '2014-02-20T18:00:00.000000000+0100', '2014-02-21T18:00:00.000000000+0100',
       '2014-02-22T00:00:00.000000000+0100', '2014-02-23T00:00:00.000000000+0100'],
      dtype='datetime64[ns]')
Coordinates:
  * time     (time) datetime64[ns] 2014-02-14T23:00:00 2014-02-15T17:00:00 ...
```

Thanks!
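A likely explanation, inferred here rather than quoted from the thread: when encoding datetime64 values to netCDF, xarray chooses integer time units automatically, and units coarser than the data (hours, in this case) silently truncate the minutes. A minimal sketch of the usual workaround, with hypothetical file names, is to pin a sufficiently fine on-disk encoding for the time coordinate:

```python
import xarray as xr

ds = xr.open_dataset("input.nc")  # hypothetical file name

# Pin the on-disk time encoding instead of letting xarray pick the units;
# second resolution is fine enough to round-trip minute-level timestamps.
ds.to_netcdf(
    "output.nc",
    encoding={"time": {"units": "seconds since 1970-01-01", "dtype": "int64"}},
)
```

With the units pinned this way, re-opening the file should show the original 18:10 timestamps rather than 18:00.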

  • reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/1064/reactions
#2275 · IPython crash after xarray.open_dataset: free(): invalid next size (fast)

  • id: 339909651 · node_id: MDU6SXNzdWUzMzk5MDk2NTE=
  • user: Scheibs (16919188) · state: closed · locked: 0 · comments: 9 · author_association: NONE
  • created_at: 2018-07-10T16:06:01Z · updated_at: 2018-09-18T16:25:32Z · closed_at: 2018-07-11T08:55:41Z
  • state_reason: completed · repo: xarray (13221727) · type: issue

I tried to open a 40 MB netCDF file with xarray.open_dataset(), but I got an IPython crash with free(): invalid next size (fast). It works fine with smaller files (about 10 MB), and it works with the 40 MB file on my other computer (Windows, 32 GB RAM).

Here is some information about my configuration:

`conda info`

```
     active environment : None
          conda version : 4.5.7
    conda-build version : 3.0.19
         python version : 2.7.13.final.0
       base environment : /local01/appli/anaconda2 (writable)
           channel URLs : https://conda.anaconda.org/conda-forge/linux-64
                          https://conda.anaconda.org/conda-forge/noarch
                          http://conda.binstar.org/mutirri/linux-64
                          http://conda.binstar.org/mutirri/noarch
                          http://repo.continuum.io/pkgs/free/linux-64
                          http://repo.continuum.io/pkgs/free/noarch
                          https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/free/linux-64
                          https://repo.anaconda.com/pkgs/free/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/pro/linux-64
                          https://repo.anaconda.com/pkgs/pro/noarch
               platform : linux-64
             user-agent : conda/4.5.7 requests/2.14.2 CPython/2.7.13 Linux/3.16.0-4-amd64 debian/8 glibc/2.19
```

`lscpu`

```
RAM :                  64 GB
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                24
On-line CPU(s) list:   0-23
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping:              2
CPU MHz:               1297.781
CPU max MHz:           3200.0000
CPU min MHz:           1200.0000
BogoMIPS:              4789.90
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-5,12-17
NUMA node1 CPU(s):     6-11,18-23
```
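glibc errors such as free(): invalid next size (fast) inside open_dataset usually point at mismatched netCDF4/HDF5 binaries rather than at xarray itself; that is an inference, not a diagnosis from this thread. A short sketch for comparing the two machines, with a hypothetical file name:

```python
import xarray as xr

# Print xarray's view of its dependency stack (netCDF4, HDF5, numpy, ...).
# Diffing this output between the crashing and the working machine often
# exposes the mismatched binary.
xr.show_versions()

# A pure-Python engine bypasses the netCDF4/HDF5 C libraries entirely;
# note the scipy engine only reads netCDF3-format files.
ds = xr.open_dataset("data.nc", engine="scipy")  # hypothetical file name
```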

  • reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/2275/reactions
#2361 · Indexing a dataset with a dimension name stored in a Python variable

  • id: 349493856 · node_id: MDU6SXNzdWUzNDk0OTM4NTY=
  • user: Scheibs (16919188) · state: closed · locked: 0 · comments: 1 · author_association: NONE
  • created_at: 2018-08-10T12:17:48Z · updated_at: 2018-08-10T15:57:16Z · closed_at: 2018-08-10T15:57:16Z
  • state_reason: completed · repo: xarray (13221727) · type: issue

I'm trying to work with netCDF files that can have varying dimension names such as NTime0, NTime1, and so on. Is there a way to index a dataset with a dimension name stored in a Python variable?

```python
timesel = ['2017-08-09', '2017-01-10']
for dim in ds.dims:
    if dim.find("Time") >= 0:
        ds = ds.sel(dim=timesel)
```

**ValueError: dimensions or multi-index levels ['dim'] do not exist**
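The keyword form ds.sel(dim=timesel) passes the literal name "dim" rather than the variable's value, which is exactly what the ValueError reports. The standard fix, shown as a sketch continuing the snippet above (not quoted from the thread), is to pass the indexers as a dictionary, which sel() accepts:

```python
timesel = ['2017-08-09', '2017-01-10']
for dim in ds.dims:
    if "Time" in dim:
        # sel() accepts a {dimension name: indexer} mapping,
        # so the name can come from a variable.
        ds = ds.sel({dim: timesel})
```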

  • reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/2361/reactions
#729 · Cannot write dask Dataset to NetCDF file

  • id: 129150619 · node_id: MDU6SXNzdWUxMjkxNTA2MTk=
  • user: Scheibs (16919188) · state: closed · locked: 0 · comments: 18 · author_association: NONE
  • created_at: 2016-01-27T13:55:56Z · updated_at: 2016-10-28T07:25:29Z · closed_at: 2016-10-28T07:25:29Z
  • state_reason: completed · repo: xarray (13221727) · type: issue

I have an xarray Dataset created using dask which I would like to write to disk. The dataset is 12 GB and my computer has 64 GB of memory, but when I run the to_netcdf command the process exhausts my memory and crashes.

```python
type(nc)
Out[5]: xray.core.dataset.Dataset

nc.chunks
Out[6]: Frozen(SortedKeysDict({'Denree': (19,), u'NIsoSource': (10, 10, 10, 10, 10, 6),
                               'Pop': (6,), u'DimJ0': (20, 17), u'DimK0': (1,),
                               u'time': (50, 50, 50, 50, 3), u'DimI0': (15,),
                               'TypeDose': (2,)}))

nc.nbytes * (2 ** -30)
Out[7]: 12.4569926
```

I don't understand what I'm doing wrong, so thanks for your help.
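In recent xarray versions, one way to keep the write lazy (a sketch under that assumption, not the resolution recorded in this thread) is to ask to_netcdf for a delayed object so dask streams chunks to disk instead of materializing the whole 12 GB dataset at once:

```python
import xarray as xr

ds = xr.open_dataset("big_input.nc", chunks={"time": 50})  # hypothetical file

# With compute=False, to_netcdf returns a dask.delayed.Delayed instead of
# writing eagerly; .compute() then writes chunk by chunk, keeping peak
# memory near the size of a single chunk.
delayed = ds.to_netcdf("big_output.nc", compute=False)
delayed.compute()
```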

  • reactions: none (all counts 0) · https://api.github.com/repos/pydata/xarray/issues/729/reactions

Table schema

CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);