issues


3 rows where user = 4338975 sorted by updated_at descending

642832962 (MDU6SXNzdWU2NDI4MzI5NjI=) · issue #4167: forward fill : transpose_coords=False
opened by NickMortimer (4338975) · state: closed · comments: 3 · author_association: NONE
created 2020-06-22T07:37:19Z · updated 2021-02-21T22:12:51Z · closed 2021-02-21T22:12:51Z

I'm building a DataArray and get the following FutureWarning, but I can't seem to find a way to pass transpose_coords=False to ffill to keep the current behavior.

```python
dsinput['soundspeed'] = (
    ('runname', 'time', 'azimuth', 'depth', 'range'),
    speed1[np.newaxis, np.newaxis, np.newaxis, :, :],
)
dsinput['soundspeed'] = dsinput['soundspeed'].ffill(dim='depth')
```

```
/local-home/mor582/miniconda3/envs/dev2/lib/python3.8/site-packages/xarray/core/missing.py:403: FutureWarning: This DataArray contains multi-dimensional coordinates. In the future, these coordinates will be transposed as well unless you specify transpose_coords=False.
  return apply_ufunc()
```
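For readers without the dataset at hand, the forward fill the snippet relies on can be sketched in plain NumPy (a minimal stand-in for `DataArray.ffill(dim='depth')`; the sample values are made up for illustration):

```python
import numpy as np

def ffill_1d(a):
    """Forward-fill NaNs: each NaN takes the last preceding non-NaN value."""
    mask = np.isnan(a)
    # index of the most recent non-NaN position at each element
    idx = np.where(~mask, np.arange(len(a)), 0)
    np.maximum.accumulate(idx, out=idx)
    return a[idx]

profile = np.array([np.nan, 1.0, np.nan, np.nan, 4.0, np.nan])
filled = ffill_1d(profile)  # leading NaN stays NaN; later NaNs fill forward
```

xarray's `ffill` applies the same propagation along the named dimension of the DataArray, broadcasting over the others.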

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/4167/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
336458472 (MDU6SXNzdWUzMzY0NTg0NzI=) · issue #2256: xarray to zarr
opened by NickMortimer (4338975) · state: closed · comments: 16 · author_association: NONE
created 2018-06-28T03:17:51Z · updated 2018-12-20T17:49:13Z · closed 2018-12-20T17:49:13Z

@jhamman Hi, I've been experimenting with converting Argo float profile data (http://www.argo.ucsd.edu/About_Argo.html) to zarr as a cache for cloud processing of Argo data. One thing I've noticed: each Argo float has many cycles (one trip up and down the water column), the sample depths are not consistent across cycles, and there are a lot of single-value attributes in each cycle file (e.g. latitude). I loaded 250 cycle files from a single float and pushed them into a zarr store with .to_zarr on each file, putting each cycle into its own group:

cache/123456 (float id)/1 (cycle)

This resulted in over 70k small files being created. Small files are very inefficient in terms of disk utilisation: my data went from 10 MB to over 100 MB on disk.

With a straight pickle into a zarr array, the compression brought the whole data series down to <1 MB!
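For context on why the group-per-cycle layout explodes: a zarr directory store writes one object per chunk, plus metadata per array and group, so the file count scales as the number of arrays times chunks per array. A rough sketch of that arithmetic, with illustrative numbers that are not taken from the issue:

```python
import math

def n_chunk_files(shape, chunks):
    """Number of chunk objects a zarr array of `shape` with `chunks` produces."""
    return math.prod(math.ceil(s / c) for s, c in zip(shape, chunks))

# 250 single-cycle groups, one tiny fully-chunked array each (metadata files excluded):
per_cycle = 250 * n_chunk_files((1, 100), (1, 100))

# one consolidated (cycle, level) array, chunked 50 cycles at a time:
consolidated = n_chunk_files((250, 100), (50, 100))
```

Consolidating the cycles into one array before writing reduces both the file count and the per-file compression overhead, which is consistent with the <1 MB result above.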

reactions: total_count 0 (https://api.github.com/repos/pydata/xarray/issues/2256/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue
337733183 (MDU6SXNzdWUzMzc3MzMxODM=) · issue #2265: .to_zarr with datetime64[ns]
opened by NickMortimer (4338975) · state: closed · comments: 4 · author_association: NONE
created 2018-07-03T03:31:21Z · updated 2018-07-04T00:25:33Z · closed 2018-07-04T00:25:33Z

Hi, I've noticed a possible inconsistency with datetime storage.

```python
t = xr.open_dataset(files[0])
t['JULD_LOCATION'][0]
# <xarray.DataArray 'JULD_LOCATION' ()>
# array('2008-07-29T20:20:58.000000000', dtype='datetime64[ns]')
# Attributes:
#     long_name:    Julian day (UTC) of the location relative to REFERENCE_DATE...
#     conventions:  Relative julian days with decimal part (as parts of day)
#     resolution:   0.0

t.to_zarr(r'D:\argo\argo2.zarr', mode='w')

za = zarr.open(r'D:\argo\argo2.zarr', mode='w+')
za['JULD_LOCATION'].info
# Out[442]:
# Name               : /JULD_LOCATION
# Type               : zarr.core.Array
# Data type          : float64
# Shape              : (197,)
# Chunk shape        : (197,)
# Order              : C
# Read-only          : False
# Compressor         : Zlib(level=1)
# Store type         : zarr.storage.DirectoryStore
# No. bytes          : 1576 (1.5K)
# No. bytes stored   : 2000 (2.0K)
# Storage ratio      : 0.8
# Chunks initialized : 1/1
```

If I try this instead:

```python
za['JULD_LOCATION1'] = t['JULD_LOCATION']
za['JULD_LOCATION1'].info
# Out[444]:
# Name               : /JULD_LOCATION1
# Type               : zarr.core.Array
# Data type          : datetime64[ns]
# Shape              : (197,)
# Chunk shape        : (197,)
# Order              : C
# Read-only          : False
# Compressor         : Zlib(level=1)
# Store type         : zarr.storage.DirectoryStore
# No. bytes          : 1576 (1.5K)
# No. bytes stored   : 1742 (1.7K)
# Storage ratio      : 0.9
# Chunks initialized : 1/1
```

There also seems to be a problem: the actual values stored differ between the two methods.

```python
pd.to_datetime(za['JULD_LOCATION'][0])
# Timestamp('1970-01-01 00:00:00.000021394')

pd.to_datetime(za['JULD_LOCATION1'][0])
# Timestamp('2008-07-29 20:20:58')
```

I think it is to do with the reference date/time not being applied?

```python
t1['REFERENCE_DATE_TIME']
# Out[459]:
# <xarray.DataArray 'REFERENCE_DATE_TIME' ()>
# array(b'19500101000000', dtype='|S14')
# Attributes:
#     long_name:    Date of reference for Julian days
#     conventions:  YYYYMMDDHHMISS
```

I hope this makes sense
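The two timestamps in the issue are consistent with CF-style time encoding: the float64 value written by to_zarr is "days since the reference date" (1950-01-01 here), which a raw zarr.open returns undecoded, and pd.to_datetime then misreads as nanoseconds since the epoch. A quick check of that arithmetic (a sketch reconstructing the value from the outputs above):

```python
from datetime import datetime

reference = datetime(1950, 1, 1)                 # REFERENCE_DATE_TIME from the file
target = datetime(2008, 7, 29, 20, 20, 58)       # the correctly decoded timestamp

days = (target - reference).total_seconds() / 86400
# ~21394.85 days — the same 21394 that shows up in the raw float when
# pd.to_datetime treats it as nanoseconds ('1970-01-01 00:00:00.000021394')
```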

reactions: total_count 1 (+1: 1) (https://api.github.com/repos/pydata/xarray/issues/2265/reactions)
state_reason: completed · repo: xarray (13221727) · type: issue


CREATE TABLE [issues] (
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [number] INTEGER,
   [title] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [state] TEXT,
   [locked] INTEGER,
   [assignee] INTEGER REFERENCES [users]([id]),
   [milestone] INTEGER REFERENCES [milestones]([id]),
   [comments] INTEGER,
   [created_at] TEXT,
   [updated_at] TEXT,
   [closed_at] TEXT,
   [author_association] TEXT,
   [active_lock_reason] TEXT,
   [draft] INTEGER,
   [pull_request] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [state_reason] TEXT,
   [repo] INTEGER REFERENCES [repos]([id]),
   [type] TEXT
);
CREATE INDEX [idx_issues_repo]
    ON [issues] ([repo]);
CREATE INDEX [idx_issues_milestone]
    ON [issues] ([milestone]);
CREATE INDEX [idx_issues_assignee]
    ON [issues] ([assignee]);
CREATE INDEX [idx_issues_user]
    ON [issues] ([user]);
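The page's row filter ("3 rows where user = 4338975 sorted by updated_at descending") corresponds to a simple query over this schema. A minimal sketch with Python's sqlite3, using an abbreviated subset of the columns and the three rows shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# abbreviated subset of the full [issues] schema
conn.executescript("""
CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    number INTEGER,
    title TEXT,
    user INTEGER,
    state TEXT,
    updated_at TEXT
);
""")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?, ?, ?, ?, ?)",
    [
        (642832962, 4167, "forward fill : transpose_coords=False", 4338975, "closed", "2021-02-21T22:12:51Z"),
        (336458472, 2256, "xarray to zarr", 4338975, "closed", "2018-12-20T17:49:13Z"),
        (337733183, 2265, ".to_zarr with datetime64[ns]", 4338975, "closed", "2018-07-04T00:25:33Z"),
    ],
)

# ISO-8601 timestamps sort correctly as text, so ORDER BY works on the TEXT column
rows = conn.execute(
    "SELECT number, title FROM issues WHERE user = ? ORDER BY updated_at DESC",
    (4338975,),
).fetchall()
```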
Powered by Datasette · Queries took 21.585ms · About: xarray-datasette