issue_comments


3 rows where issue = 1423948375 sorted by updated_at descending


id: 1296006560 · node_id: IC_kwDOAMm_X85NP32g
user: hmaarrfk (90008) · author_association: CONTRIBUTOR
created_at: 2022-10-29T22:39:39Z · updated_at: 2022-10-29T22:39:39Z
html_url: https://github.com/pydata/xarray/issues/7224#issuecomment-1296006560
issue_url: https://api.github.com/repos/pydata/xarray/issues/7224

xref: https://github.com/pandas-dev/pandas/pull/49393

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Insertion speed of new dataset elements (1423948375)
id: 1296006402 · node_id: IC_kwDOAMm_X85NP30C
user: hmaarrfk (90008) · author_association: CONTRIBUTOR
created_at: 2022-10-29T22:39:01Z · updated_at: 2022-10-29T22:39:01Z
html_url: https://github.com/pydata/xarray/issues/7224#issuecomment-1296006402
issue_url: https://api.github.com/repos/pydata/xarray/issues/7224

Ok, I don't think I have the right tools to really get to the bottom of this. The Spyder profiler just seems to slow the code down too much. Any other tools to recommend?
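One lower-overhead option is the standard library's cProfile/pstats pair, which instruments calls without a GUI in the loop. A minimal sketch (the `build_variables` function is a hypothetical stand-in workload, not xarray internals):

```python
import cProfile
import io
import pstats


def build_variables(n):
    """Hypothetical stand-in workload: grow a mapping entry by entry,
    roughly the shape of repeated Dataset insertion."""
    data = {}
    for i in range(n):
        data[f"long_variable_name{i}"] = ("time", [0, 1])
    return data


profiler = cProfile.Profile()
profiler.enable()
build_variables(100)
profiler.disable()

# Report the ten most expensive calls by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```

Running the real insertion loop under `profiler.enable()`/`profiler.disable()` instead of the stand-in would show where the time actually goes.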

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Insertion speed of new dataset elements (1423948375)
id: 1292216344 · node_id: IC_kwDOAMm_X85NBagY
user: Illviljan (14371165) · author_association: MEMBER
created_at: 2022-10-26T15:21:54Z · updated_at: 2022-10-26T15:21:54Z
html_url: https://github.com/pydata/xarray/issues/7224#issuecomment-1292216344
issue_url: https://api.github.com/repos/pydata/xarray/issues/7224

I have thought a little about this as well and went and looked in my old code. Building a data_vars dict with the data first and only creating the Dataset at the end seems to be the way to go:

```python
import numpy as np
import xarray as xr
from time import perf_counter

# %% Inputs
names = np.core.defchararray.add("long_variable_name", np.arange(0, 100).astype(str))
time = np.array([0, 1])
coords = dict(time=time)
value = np.array(["0", "b"], dtype=str)

# %% Insert to Dataset with DataArray:
time_start = perf_counter()
ds = xr.Dataset(coords=coords)
for v in names:
    ds[v] = xr.DataArray(data=value, coords=coords)
time_end = perf_counter()
time_elapsed = time_end - time_start
print("Insert to Dataset with DataArray:", time_elapsed)

# %% Insert to Dataset with Variable:
time_start = perf_counter()
ds = xr.Dataset(coords=coords)
for v in names:
    ds[v] = xr.Variable("time", value)
time_end = perf_counter()
time_elapsed = time_end - time_start
print("Insert to Dataset with Variable:", time_elapsed)

# %% Insert to Dataset with tuple:
time_start = perf_counter()
ds = xr.Dataset(coords=coords)
for v in names:
    ds[v] = ("time", value)
time_end = perf_counter()
time_elapsed = time_end - time_start
print("Insert to Dataset with tuple:", time_elapsed)

# %% Dict of DataArrays then create Dataset:
time_start = perf_counter()
data_vars = dict()
for v in names:
    data_vars[v] = xr.DataArray(data=value, coords=coords)
ds = xr.Dataset(data_vars=data_vars, coords=coords)
time_end = perf_counter()
time_elapsed = time_end - time_start
print("Dict of DataArrays then create Dataset:", time_elapsed)

# %% Dict of Variables then create Dataset:
time_start = perf_counter()
data_vars = dict()
for v in names:
    data_vars[v] = xr.Variable("time", value)
ds = xr.Dataset(data_vars=data_vars, coords=coords)
time_end = perf_counter()
time_elapsed = time_end - time_start
print("Dict of Variables then create Dataset:", time_elapsed)

# %% Dict of tuples then create Dataset:
time_start = perf_counter()
data_vars = dict()
for v in names:
    data_vars[v] = ("time", value)
ds = xr.Dataset(data_vars=data_vars, coords=coords)
time_end = perf_counter()
time_elapsed = time_end - time_start
print("Dict of tuples then create Dataset:", time_elapsed)
```

```python
Insert to Dataset with DataArray: 0.3787728999996034
Insert to Dataset with Variable: 0.3083788999997523
Insert to Dataset with tuple: 0.30018929999960164
Dict of DataArrays then create Dataset: 0.07277609999982815
Dict of Variables then create Dataset: 0.005166500000086671
Dict of tuples then create Dataset: 0.003186699999787379  # Winner! :)
```

{
    "total_count": 3,
    "+1": 3,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
issue: Insertion speed of new dataset elements (1423948375)
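The timings above track per-assignment work: every `ds[v] = ...` round-trips through Dataset construction/merge machinery, while the dict variants pay that cost once. A stdlib-only toy (the `ToyDataset` class is hypothetical, not xarray) showing the same shape of cost:

```python
from time import perf_counter


class ToyDataset:
    """Hypothetical stand-in: copies its whole mapping on every insert,
    mimicking per-assignment validation/merge overhead."""

    def __init__(self, data_vars=None):
        self._vars = dict(data_vars or {})

    def __setitem__(self, name, value):
        new = dict(self._vars)  # full copy per insert -> O(n^2) total work
        new[name] = value
        self._vars = new


names = [f"long_variable_name{i}" for i in range(2000)]

# Incremental insertion: one full copy per variable.
t0 = perf_counter()
ds = ToyDataset()
for v in names:
    ds[v] = ("time", [0, 1])
incremental = perf_counter() - t0

# Batched construction: build the dict first, construct once.
t0 = perf_counter()
ds = ToyDataset({v: ("time", [0, 1]) for v in names})
batched = perf_counter() - t0

print(f"incremental: {incremental:.4f}s  batched: {batched:.4f}s")
```

The absolute numbers are meaningless; the point is that batching turns quadratic copying into a single linear pass, which is the same reason the dict-of-tuples variant wins above.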

```sql
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
```
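The page's "rows where issue = 1423948375 sorted by updated_at descending" view is served by `idx_issue_comments_issue` on this schema. A stdlib `sqlite3` sketch reproducing it (foreign-key references dropped for self-containment; the inserted rows use only the ids and timestamps shown above):

```python
import sqlite3

# In-memory copy of the schema above, minus the REFERENCES clauses.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE issue_comments (
   html_url TEXT, issue_url TEXT, id INTEGER PRIMARY KEY, node_id TEXT,
   user INTEGER, created_at TEXT, updated_at TEXT, author_association TEXT,
   body TEXT, reactions TEXT, performed_via_github_app TEXT, issue INTEGER
);
CREATE INDEX idx_issue_comments_issue ON issue_comments (issue);
""")

# The three comments on this page (id, user, updated_at, issue).
conn.executemany(
    "INSERT INTO issue_comments (id, user, updated_at, issue) VALUES (?, ?, ?, ?)",
    [
        (1296006560, 90008, "2022-10-29T22:39:39Z", 1423948375),
        (1296006402, 90008, "2022-10-29T22:39:01Z", 1423948375),
        (1292216344, 14371165, "2022-10-26T15:21:54Z", 1423948375),
    ],
)

# ISO-8601 timestamps sort correctly as text, so ORDER BY works on TEXT columns.
rows = conn.execute(
    "SELECT id FROM issue_comments WHERE issue = ? ORDER BY updated_at DESC",
    (1423948375,),
).fetchall()
print(rows)
```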
Powered by Datasette · Queries took 13.439ms · About: xarray-datasette