issue_comments


2 rows where issue = 507658070 sorted by updated_at descending

Comment 542599383 · zxdawn (user 30388627) · NONE · created/updated 2019-10-16T08:55:01Z
https://github.com/pydata/xarray/issues/3407#issuecomment-542599383

@DocOtak Thank you for your explanation! It works well now :)

Reactions: none
Issue: Save 'S1' array without the char_dim_name dimension (507658070)
Comment 542594876 · DocOtak (user 868027) · CONTRIBUTOR · created/updated 2019-10-16T08:44:50Z
https://github.com/pydata/xarray/issues/3407#issuecomment-542594876

Hi @zxdawn

Does this modified version of your code do what you want?

```python
import numpy as np
import xarray as xr

tstr = '2019-07-25_00:00:00'
Times = xr.DataArray(np.array([tstr], dtype=np.dtype(('S', 19))), dims=['Time'])
ds = xr.Dataset({'Times': Times})
ds.to_netcdf(
    'test.nc',
    format='NETCDF4',
    encoding={
        'Times': {
            'zlib': True,
            'complevel': 5,
            'char_dim_name': 'DateStrLen'
        }
    },
    unlimited_dims={'Time': True}
)
```

Output of ncdump:

```
netcdf test {
dimensions:
        Time = UNLIMITED ; // (1 currently)
        DateStrLen = 19 ;
variables:
        char Times(Time, DateStrLen) ;
data:

 Times = "2019-07-25_00:00:00" ;
}
```

Some explanation of what is going on: strings in numpy aren't the most friendly thing to work with, and the data types can be a little confusing. In your code, the "S1" data type says "this array has null-terminated strings of length 1"; the 1 in "S1" is the string length. That gave you an array of one-character strings that was 19 elements long:

```
array([[b'2', b'0', b'1', b'9', b'-', b'0', b'7', b'-', b'2', b'5', b'_',
        b'0', b'0', b':', b'0', b'0', b':', b'0', b'0']], dtype='|S1')
```

vs what I think you want:

```
array([b'2019-07-25_00:00:00'], dtype='|S19')
```
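The "S1" vs "S19" distinction above can be reproduced with plain numpy (a minimal sketch, no xarray needed; the variable names are illustrative):

```python
import numpy as np

tstr = "2019-07-25_00:00:00"  # 19 characters

# "S1" means "strings of length 1": the date has to be split into
# 19 one-character elements to survive.
chars = np.array(list(tstr), dtype="S1")
print(chars.shape, chars.dtype)  # (19,) |S1

# Passing the whole string with dtype "S1" silently truncates it instead:
print(np.array([tstr], dtype="S1"))  # [b'2']

# "S19" holds the whole string in a single element.
whole = np.array([tstr], dtype="S19")
print(whole.shape, whole.dtype)  # (1,) |S19
```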

Since you know that your string length is going to be 19, you should tell numpy about this (rather than xarray) by either specifying the data type as `"S19"` or using the data type constructor (which I prefer): `np.dtype(("S", 19))`
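The two spellings mentioned above build the same dtype; a quick check (minimal sketch):

```python
import numpy as np

# The string form and the (kind, length) constructor tuple are equivalent.
assert np.dtype(("S", 19)) == np.dtype("S19")

# itemsize is the string length in bytes.
print(np.dtype(("S", 19)).itemsize)  # 19
```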

Reactions: +1 ×3
Issue: Save 'S1' array without the char_dim_name dimension (507658070)

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
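The schema above can be exercised directly with Python's built-in sqlite3 module. A minimal sketch: the inserted row values are taken from the comments on this page, and the REFERENCES clauses are dropped since the users and issues tables aren't recreated here.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
""")

conn.execute(
    "INSERT INTO issue_comments (id, user, author_association, issue, updated_at)"
    " VALUES (?, ?, ?, ?, ?)",
    (542599383, 30388627, "NONE", 507658070, "2019-10-16T08:55:01Z"),
)

# The page's query: rows where issue = 507658070, sorted by updated_at descending.
rows = conn.execute(
    "SELECT id, user, updated_at FROM issue_comments"
    " WHERE issue = ? ORDER BY updated_at DESC",
    (507658070,),
).fetchall()
print(rows)  # [(542599383, 30388627, '2019-10-16T08:55:01Z')]
```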
Powered by Datasette · Queries took 887.148ms · About: xarray-datasette