xarray issue #7700: Losing data when reading/converting GRIB2 files to netCDF using `open_dataset`/`to_netcdf` methods

  • id: 1647805851 (node_id: I_kwDOAMm_X85iN4Wb)
  • user: mmgamboa (31394655)
  • state: closed (state_reason: completed)
  • comments: 2
  • created_at: 2023-03-30T15:00:34Z
  • updated_at: 2023-03-30T17:03:15Z
  • closed_at: 2023-03-30T17:03:15Z
  • author_association: NONE
  • reactions: 0
  • repo: xarray (13221727)
  • type: issue

What is your issue?

Hi all,

I have data in a GRIB2 file and I want to convert it to netCDF format. The original dataset (confirmed using the pygrib package) has 12 messages: 6 different isobaric levels, each with 2 variables (average and maximum), but when I convert the file using xarray I lose 6 of the 12 messages.

The messages of the original file, as listed by `pygrib.open('filename.grib2').read()`, are:

```
1:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 15000 Pa:fcst time 6 hrs:from 202001080600
2:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 15000 Pa:fcst time 6 hrs:from 202001080600
3:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 20000 Pa:fcst time 6 hrs:from 202001080600
4:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 20000 Pa:fcst time 6 hrs:from 202001080600
5:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 25000 Pa:fcst time 6 hrs:from 202001080600
6:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 25000 Pa:fcst time 6 hrs:from 202001080600
7:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 30000 Pa:fcst time 6 hrs:from 202001080600
8:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 30000 Pa:fcst time 6 hrs:from 202001080600
9:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 35000 Pa:fcst time 6 hrs:from 202001080600
10:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 35000 Pa:fcst time 6 hrs:from 202001080600
11:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 40000 Pa:fcst time 6 hrs:from 202001080600
12:Relative clear air turbulence (RCAT):% (instant):regular_ll:isobaricInhPa:level 40000 Pa:fcst time 6 hrs:from 202001080600
```
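If it helps with debugging, here is a minimal sketch (not taken from the original commands) of how the paired messages could be inspected with pygrib to look for a GRIB key that actually differs between them; it only prints each message summary and its key names.

```python
# Minimal sketch: dump every GRIB key of each message so that two paired
# messages (e.g. 1 and 2) can be compared to find whichever key differs.
import pygrib

grbs = pygrib.open('filename.grib2')
for msg in grbs:
    print(msg)                 # the same summary line shown in the listing above
    print(sorted(msg.keys()))  # all GRIB keys defined on this message
grbs.close()
```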

To perform the conversion I run the following commands:

```python
import xarray

data = xarray.open_dataset('filename.grib2', engine='cfgrib')
data.to_netcdf('netcdf_file.nc')
```
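For reference, a minimal sketch (an assumed workaround, not part of the original commands) of how `cfgrib.open_datasets` could split the file into every consistent hypercube, so that both fields per level end up in some dataset instead of being dropped:

```python
# Minimal sketch: open every consistent hypercube in the GRIB file rather than
# letting the cfgrib engine keep only one group of conflicting messages.
import cfgrib

datasets = cfgrib.open_datasets('filename.grib2')   # list of xarray.Dataset objects
print(f"found {len(datasets)} hypercube(s)")

# Write each hypercube to its own netCDF file so no messages are lost.
for i, ds in enumerate(datasets):
    ds.to_netcdf(f'netcdf_file_{i}.nc')
```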

and then, to read it back in another script, I run:

```python
import netCDF4 as nc

ds = nc.Dataset('netcdf_file.nc', engine='netcdf4')
```
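For completeness, a small sketch of how the written file can be inspected to confirm how many levels survived; the coordinate name `isobaricInhPa` is only a guess at what cfgrib/xarray writes, and the `engine` keyword (an `xarray.open_dataset` argument) is omitted here:

```python
# Minimal sketch: list what actually ended up in the converted netCDF file.
import netCDF4 as nc

ds = nc.Dataset('netcdf_file.nc')
print(ds.dimensions)                          # dimension names and sizes (how many levels?)
print(list(ds.variables))                     # variable names that were written
if 'isobaricInhPa' in ds.variables:           # assumed coordinate name
    print(ds.variables['isobaricInhPa'][:])   # the pressure levels that survived
ds.close()
```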

In any case, both the `data` and `ds` objects have fewer levels (6). Here is a screenshot of the `data` object.

Is xarray losing data when reading the GRIB2 file? Is it possible that the problem comes from the fact that the original messages look identical for a given isobaric level? In that case, can I rewrite the messages, adding a flag to the CAT parameter that marks one message as the average (ave) and the other as the maximum (max)?
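As a hedged sketch only: if some GRIB key does distinguish the two messages per level, cfgrib's `filter_by_keys` backend option could select each group explicitly instead of rewriting the messages. The `stepType` values below are placeholders; the listing above shows both messages as `instant`, so the real distinguishing key is likely something else.

```python
# Minimal sketch: select each group of messages by an (assumed) GRIB key.
import xarray as xr

# Placeholder key/value pairs: replace 'stepType'/'avg'/'max' with whatever key
# is found to actually differ between the paired messages.
avg = xr.open_dataset('filename.grib2', engine='cfgrib',
                      backend_kwargs={'filter_by_keys': {'stepType': 'avg'}})
mx = xr.open_dataset('filename.grib2', engine='cfgrib',
                     backend_kwargs={'filter_by_keys': {'stepType': 'max'}})

avg.to_netcdf('netcdf_file_avg.nc')
mx.to_netcdf('netcdf_file_max.nc')
```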

Thanks in advance, Martín Gamboa

