
issue_comments


2 rows where issue = 290023410 and user = 1217238 sorted by updated_at descending

Row 1
  id: 417694660
  html_url: https://github.com/pydata/xarray/issues/1844#issuecomment-417694660
  issue_url: https://api.github.com/repos/pydata/xarray/issues/1844
  node_id: MDEyOklzc3VlQ29tbWVudDQxNzY5NDY2MA==
  user: shoyer (1217238)
  created_at: 2018-08-31T15:09:56Z
  updated_at: 2018-08-31T15:09:56Z
  author_association: MEMBER
  body:

@chiaral You should take a look at CFTimeIndex which specifically was designed to solve this problem: http://xarray.pydata.org/en/stable/time-series.html#non-standard-calendars-and-dates-outside-the-timestamp-valid-range

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  issue: How to broadcast along dayofyear (290023410)
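
As a rough illustration of the CFTimeIndex suggestion above (a minimal sketch, assuming a recent xarray with the optional cftime package installed; the calendar and values are made up for illustration):

```python
import numpy as np
import xarray as xr

# A daily time axis on a 'noleap' (365-day) calendar, which a plain
# pandas DatetimeIndex cannot represent; cftime_range returns a CFTimeIndex.
times = xr.cftime_range("2000-01-01", periods=730, freq="D", calendar="noleap")
da = xr.DataArray(np.arange(730), coords={"time": times}, dims="time")

# Datetime-component grouping works on the cftime coordinate too, so a
# day-of-year climatology follows the usual groupby pattern.
print(da.groupby("time.dayofyear").mean("time"))
```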
Row 2
  id: 359129344
  html_url: https://github.com/pydata/xarray/issues/1844#issuecomment-359129344
  issue_url: https://api.github.com/repos/pydata/xarray/issues/1844
  node_id: MDEyOklzc3VlQ29tbWVudDM1OTEyOTM0NA==
  user: shoyer (1217238)
  created_at: 2018-01-20T00:49:33Z
  updated_at: 2018-01-20T00:49:56Z
  author_association: MEMBER
  body:

You can do this in a single step with xarray.apply_ufunc(), which is a sort of more flexible/powerful interface to xarray's broadcasting arithmetic. Extending the toy weather example from the docs:

```python
import xarray as xr
import numpy as np
import pandas as pd
import seaborn as sns  # pandas aware plotting library

np.random.seed(123)

times = pd.date_range('2000-01-01', '2001-12-31', name='time')
annual_cycle = np.sin(2 * np.pi * (np.array(times.dayofyear) / 365.25 - 0.28))

base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)

ds = xr.Dataset({'tmin': (('time', 'location'), tmin_values),
                 'tmax': (('time', 'location'), tmax_values)},
                {'time': times, 'location': ['IA', 'IN', 'IL']})

# new code
ds_mean = ds.groupby('time.month').mean('time')
ds_std = ds.groupby('time.month').std('time')

xr.apply_ufunc(lambda x, m, s: (x - m) / s,
               ds.groupby('time.month'), ds_mean, ds_std)
```

The other way (about twice as slow) is to chain two calls to groupby():

```python
(ds.groupby('time.month') - ds_mean).groupby('time.month') / ds_std
```

I'll mark this as a documentation issue in case anyone wants to add an example to the docs.

reactions:
{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  issue: How to broadcast along dayofyear (290023410)
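
The issue title asks about dayofyear rather than month; the same apply_ufunc pattern carries over by swapping the group key. A sketch (my adaptation, not from the thread), reusing xr and the toy ds built in the code block above:

```python
# Day-of-year anomalies, mirroring the month-based example above.
# Assumes `xr` and the toy `ds` Dataset from the previous code block.
clim_mean = ds.groupby('time.dayofyear').mean('time')
clim_std = ds.groupby('time.dayofyear').std('time')

anomalies = xr.apply_ufunc(
    lambda x, m, s: (x - m) / s,   # standardize against the group statistics
    ds.groupby('time.dayofyear'),  # data grouped by day of year
    clim_mean,                     # per-day-of-year mean
    clim_std,                      # per-day-of-year standard deviation
)
```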

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
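
The filtered view above ("2 rows where issue = 290023410 and user = 1217238 sorted by updated_at descending") corresponds to a straightforward query against this schema. A sketch using Python's built-in sqlite3 module; the database filename github.db is an assumption:

```python
import sqlite3

# Assumption: the Datasette instance is backed by a SQLite file named github.db.
conn = sqlite3.connect("github.db")

rows = conn.execute(
    """
    SELECT id, [user], created_at, updated_at, author_association
    FROM issue_comments
    WHERE issue = ? AND [user] = ?
    ORDER BY updated_at DESC
    """,
    (290023410, 1217238),
).fetchall()

for comment_id, user_id, created, updated, association in rows:
    print(comment_id, user_id, created, updated, association)
```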