
issue_comments


5 rows where issue = 165104458 (mfdataset fails at chunking after opening), sorted by updated_at descending


Comment 457903796 · stale[bot] · NONE · 2019-01-27T09:51:12Z
https://github.com/pydata/xarray/issues/896#issuecomment-457903796

In order to maintain a list of currently relevant issues, we mark issues as stale after a period of inactivity. If this issue remains relevant, please comment here; otherwise it will be marked as closed automatically.

Comment 232115345 · apatlpo · CONTRIBUTOR · 2016-07-12T17:18:38Z
https://github.com/pydata/xarray/issues/896#issuecomment-232115345

Along `time_counter`.
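
Since the files differ only along `time_counter`, the usual pattern is to let `open_mfdataset` concatenate along that dimension. A minimal sketch, assuming a hypothetical glob of NEMO output files (current xarray wants an explicit `combine="nested"`; the 0.7-era API took `concat_dim` alone):

```
import xarray as xr

# Hypothetical file pattern: one 5-day mean per file, each holding a
# single record along the UNLIMITED dimension time_counter.
ds = xr.open_mfdataset(
    "NATL60-MJM155_*.5d_gridT.nc",
    combine="nested",            # explicit nested combine (xarray >= 0.12)
    concat_dim="time_counter",   # the dimension along which the files differ
)
print(ds.sizes)  # time_counter now spans all input files
```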

Comment 232115059 · shoyer · MEMBER · 2016-07-12T17:17:38Z
https://github.com/pydata/xarray/issues/896#issuecomment-232115059

@apatlpo Along what axis do your multiple files differ?

Comment 232114342 · apatlpo · CONTRIBUTOR · 2016-07-12T17:15:14Z
https://github.com/pydata/xarray/issues/896#issuecomment-232114342

Thanks for your answer. You'll find below the output from ncdump, which should answer the question about data arrangement (I hope ...). Otherwise, we have:

dask     0.8.2  py27_0      defaults
xarray   0.7.2  py27_0      defaults
netcdf4  1.1.1  np18py27_0  defaults

Looking forward to any suggestions. Cheers, aurelien

```
netcdf NATL60-MJM155_y2008m01d09.5d_gridT {
dimensions:
    x = 5422 ;
    y = 3454 ;
    deptht = 300 ;
    time_counter = UNLIMITED ; // (1 currently)
    time_bounds = 2 ;
variables:
    float nav_lat(y, x) ;
        nav_lat:axis = "Y" ;
        nav_lat:standard_name = "latitude" ;
        nav_lat:long_name = "Latitude" ;
        nav_lat:units = "degrees_north" ;
        nav_lat:nav_model = "grid_T" ;
        nav_lat:_Storage = "chunked" ;
        nav_lat:_ChunkSizes = 12, 5422 ;
    float nav_lon(y, x) ;
        nav_lon:axis = "X" ;
        nav_lon:standard_name = "longitude" ;
        nav_lon:long_name = "Longitude" ;
        nav_lon:units = "degrees_east" ;
        nav_lon:nav_model = "grid_T" ;
        nav_lon:_Storage = "chunked" ;
        nav_lon:_ChunkSizes = 12, 5422 ;
    float deptht(deptht) ;
        deptht:axis = "Z" ;
        deptht:long_name = "Vertical T levels" ;
        deptht:units = "m" ;
        deptht:positive = "down" ;
        deptht:_Storage = "chunked" ;
        deptht:_ChunkSizes = 300 ;
    float votemper(time_counter, deptht, y, x) ;
        votemper:long_name = "temperature" ;
        votemper:units = "degC" ;
        votemper:online_operation = "average" ;
        votemper:interval_operation = "40s" ;
        votemper:interval_write = "5d" ;
        votemper:_FillValue = 0.f ;
        votemper:missing_value = 0.f ;
        votemper:coordinates = "time_centered deptht nav_lon nav_lat" ;
        votemper:_Storage = "chunked" ;
        votemper:_ChunkSizes = 1, 1, 12, 5422 ;
        votemper:_DeflateLevel = 1 ;
    double time_centered(time_counter) ;
        time_centered:standard_name = "time" ;
        time_centered:long_name = "Time axis" ;
        time_centered:title = "Time" ;
        time_centered:calendar = "gregorian" ;
        time_centered:units = "seconds since 1958-01-01 00:00:00" ;
        time_centered:time_origin = "1958-01-01 00:00:00" ;
        time_centered:bounds = "time_centered_bounds" ;
        time_centered:_Storage = "chunked" ;
        time_centered:_ChunkSizes = 1 ;
    double time_centered_bounds(time_counter, time_bounds) ;
        time_centered_bounds:_Storage = "chunked" ;
        time_centered_bounds:_ChunkSizes = 1, 2 ;
    double time_counter(time_counter) ;
        time_counter:axis = "T" ;
        time_counter:standard_name = "time" ;
        time_counter:long_name = "Time axis" ;
        time_counter:title = "Time" ;
        time_counter:calendar = "gregorian" ;
        time_counter:units = "seconds since 1958-01-01 00:00:00" ;
        time_counter:time_origin = "1958-01-01 00:00:00" ;
        time_counter:bounds = "time_counter_bounds" ;
        time_counter:_Storage = "chunked" ;
        time_counter:_ChunkSizes = 1 ;
    double time_counter_bounds(time_counter, time_bounds) ;
        time_counter_bounds:_Storage = "chunked" ;
        time_counter_bounds:_ChunkSizes = 1, 2 ;
    float vosaline(time_counter, deptht, y, x) ;
        vosaline:long_name = "salinity" ;
        vosaline:units = "psu" ;
        vosaline:online_operation = "average" ;
        vosaline:interval_operation = "40s" ;
        vosaline:interval_write = "5d" ;
        vosaline:_FillValue = 0.f ;
        vosaline:missing_value = 0.f ;
        vosaline:coordinates = "time_centered deptht nav_lon nav_lat" ;
        vosaline:_Storage = "chunked" ;
        vosaline:_ChunkSizes = 1, 1, 12, 5422 ;
        vosaline:_DeflateLevel = 1 ;
    float sossheig(time_counter, y, x) ;
        sossheig:long_name = "sea surface height" ;
        sossheig:units = "m" ;
        sossheig:online_operation = "average" ;
        sossheig:interval_operation = "40s" ;
        sossheig:interval_write = "5d" ;
        sossheig:_FillValue = 0.f ;
        sossheig:missing_value = 0.f ;
        sossheig:coordinates = "time_centered nav_lon nav_lat" ;
        sossheig:_Storage = "chunked" ;
        sossheig:_ChunkSizes = 1, 12, 5422 ;

// global attributes:
        :description = "ocean T grid variables" ;
        :conventions = "CF-1.1" ;
        :production = "An IPSL model" ;
        :start_date = 20040101 ;
        :output_frequency = "5d" ;
        :CONFIG = "NATL60" ;
        :CASE = "MJM155" ;
        :_Format = "netCDF-4" ;
}
```
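
The `_ChunkSizes` attributes above describe the on-disk HDF5 chunking (e.g. `1, 1, 12, 5422` for `votemper`). One common remedy in this situation is to pick dask chunks that are multiples of the on-disk chunks, so each dask task reads whole HDF5 chunks instead of slicing across them. A minimal sketch, with illustrative chunk sizes:

```
import xarray as xr

# Dask chunks chosen as multiples of the on-disk _ChunkSizes
# (1, 1, 12, 5422 for votemper); 480 = 40 * 12 rows of y per task.
ds = xr.open_dataset(
    "NATL60-MJM155_y2008m01d09.5d_gridT.nc",
    chunks={"time_counter": 1, "deptht": 1, "y": 480, "x": 5422},
)
print(ds["votemper"].chunks)  # lazy dask-backed array; nothing read yet
```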

Comment 232101159 · shoyer · MEMBER · 2016-07-12T16:28:40Z
https://github.com/pydata/xarray/issues/896#issuecomment-232101159

This error indicates that Python is running out of memory. Dask (which we use with mfdataset) can help with that, but it doesn't always solve the issue.

How is the data arranged in each of the input files? What version of dask are you using? This might be an issue with `dask.array` not fusing calls to `__getitem__` and loading entire files into memory.

cc @mrocklin @jcrist
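
The issue title refers to calling `.chunk()` after opening; `open_mfdataset` also accepts a `chunks` argument, which sets the dask chunking as each file is opened rather than layering a second re-chunk step on top. A sketch with illustrative paths and sizes:

```
import xarray as xr

# Chunk at open time: the chunking is applied as each file is read,
# so no extra re-chunk graph is built afterwards.
ds = xr.open_mfdataset("gridT_*.nc", chunks={"deptht": 10})

# The pattern that failed for the reporter: open first, re-chunk
# after. This layers a second dask graph over the per-file one.
ds2 = xr.open_mfdataset("gridT_*.nc").chunk({"deptht": 10})
```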


CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
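
For queries beyond the canned row selection above, the same query can be run directly against the underlying SQLite database; a minimal sketch using Python's built-in `sqlite3` (the database filename is an assumption):

```
import sqlite3

# "github.db" is an assumed filename for the Datasette database file.
conn = sqlite3.connect("github.db")
rows = conn.execute(
    """
    SELECT id, user, created_at, body
    FROM issue_comments
    WHERE issue = 165104458
    ORDER BY updated_at DESC
    """
).fetchall()
for comment_id, user_id, created_at, body in rows:
    print(comment_id, user_id, created_at, body[:60])
```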