issue_comments

2 rows where author_association = "MEMBER" and issue = 671609109 sorted by updated_at descending

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
672987876 https://github.com/pydata/xarray/issues/4300#issuecomment-672987876 https://api.github.com/repos/pydata/xarray/issues/4300 MDEyOklzc3VlQ29tbWVudDY3Mjk4Nzg3Ng== TomNicholas 35968931 2020-08-12T16:45:23Z 2020-08-12T16:45:23Z MEMBER

@AndrewWilliams3142 fair question: what I was envisaging was taking slices along that dimension(s), performing the curve fitting once for each slice (which should parallelize through apply_ufunc), then returning the optimised fitting parameters as a DataArray/Dataset which varied along that dimension. For example:

```python
import numpy as np

# 2D dataarray of surface height with x & t dependence
height_data

def pulse_shape(x, peak_height, peak_location, FWHM):
    return peak_height * np.exp(-((x - peak_location) / FWHM) ** 2.0)

# returned fit_params has t dependence
fit_params = height_data.fit(pulse_shape, fit_along='x')

# Plot a graph of change in peak height over t
fit_params['peak_height'].plot(x='t')
```
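
A minimal runnable sketch of what that could look like under the hood, assuming `scipy.optimize.curve_fit` is applied once per slice through `xarray.apply_ufunc`; the helper `fit_over_dim`, its `fit_along` argument, and the synthetic data are illustrative and not an existing xarray API:

```python
import numpy as np
import xarray as xr
from scipy.optimize import curve_fit


def pulse_shape(x, peak_height, peak_location, FWHM):
    return peak_height * np.exp(-((x - peak_location) / FWHM) ** 2.0)


def fit_over_dim(data, func, fit_along, param_names):
    """Fit `func` along `fit_along`, returning one variable per fitted parameter."""
    coord = data[fit_along].values

    def _fit_1d(values):
        # One scipy curve_fit call per slice; apply_ufunc vectorizes this
        # over every other dimension of `data`.
        popt, _ = curve_fit(func, coord, values)
        return popt

    params = xr.apply_ufunc(
        _fit_1d,
        data,
        input_core_dims=[[fit_along]],
        output_core_dims=[["param"]],
        vectorize=True,
    )
    return xr.Dataset({name: params.isel(param=i) for i, name in enumerate(param_names)})


# Synthetic height(t, x) data: a Gaussian pulse whose peak drifts with t.
x = np.linspace(-5, 5, 101)
t = np.arange(10)
height_data = xr.DataArray(
    np.exp(-(((x[None, :] - 0.1 * t[:, None]) / 1.5) ** 2)),
    coords={"t": t, "x": x},
    dims=("t", "x"),
)

fit_params = fit_over_dim(
    height_data, pulse_shape, "x", ["peak_height", "peak_location", "FWHM"]
)
fit_params["peak_location"].plot(x="t")  # peak location drifts linearly with t
```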

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  General curve fitting method 671609109
671475798 https://github.com/pydata/xarray/issues/4300#issuecomment-671475798 https://api.github.com/repos/pydata/xarray/issues/4300 MDEyOklzc3VlQ29tbWVudDY3MTQ3NTc5OA== shoyer 1217238 2020-08-10T17:06:11Z 2020-08-10T17:06:11Z MEMBER

+1 for just wrapping the existing functionality in SciPy for now. If we want a version of `curve_fit` that supports dask, I would suggest implementing `curve_fit` with dask first, and then using that from xarray.

I am OK with using `inspect` from the standard library for determining default parameter names. `inspect.signature` is reasonably robust. But there should definitely be an optional argument for setting parameter names explicitly.
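
A small sketch of that suggestion, assuming the first argument of the fit function is the independent variable and the remaining arguments are the parameters to fit; `default_param_names` is a hypothetical helper that includes the explicit override shoyer asks for:

```python
import inspect

import numpy as np


def pulse_shape(x, peak_height, peak_location, FWHM):
    return peak_height * np.exp(-((x - peak_location) / FWHM) ** 2.0)


def default_param_names(func, param_names=None):
    """Infer parameter names from `func`'s signature unless given explicitly."""
    if param_names is not None:
        return list(param_names)
    args = list(inspect.signature(func).parameters)
    # Drop the first argument, which is the independent variable.
    return args[1:]


print(default_param_names(pulse_shape))
# ['peak_height', 'peak_location', 'FWHM']
```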

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  General curve fitting method 671609109

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);