issue_comments


21 rows where author_association = "CONTRIBUTOR" and user = 12307589 sorted by updated_at descending




issue 9

  • keep_attrs for Dataset.resample and DataArray.resample 4
  • Advice on unit-aware arithmetic 4
  • keep_attrs for Dataset.resample and DataArray.resample 3
  • Consider how to deal with the proliferation of decoder options on open_dataset 3
  • Add remaining date units to conventions.py 2
  • Cannot save netcdf files with non-standard calendars 2
  • Attributes are currently kept when arrays are resampled, and not when datasets are resampled 1
  • Fix drop docstring 1
  • Dataset creation requires tuple, list treated differently 1

id html_url issue_url node_id user created_at updated_at ▲ author_association body reactions performed_via_github_app issue
392138523 https://github.com/pydata/xarray/issues/2176#issuecomment-392138523 https://api.github.com/repos/pydata/xarray/issues/2176 MDEyOklzc3VlQ29tbWVudDM5MjEzODUyMw== mcgibbon 12307589 2018-05-25T18:11:55Z 2018-05-25T18:11:55Z CONTRIBUTOR

Thank you for the input everyone, this discussion has been very useful! Closing this issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Advice on unit-aware arithmetic 325810810
391800114 https://github.com/pydata/xarray/issues/2176#issuecomment-391800114 https://api.github.com/repos/pydata/xarray/issues/2176 MDEyOklzc3VlQ29tbWVudDM5MTgwMDExNA== mcgibbon 12307589 2018-05-24T17:41:27Z 2018-05-24T17:41:27Z CONTRIBUTOR

@dopplershift That's a good point. It's pretty trivial to create a sympl.DataArray from an xarray.DataArray, so perhaps I should be using a decorator that will convert xarray.DataArray to sympl.DataArray whenever one is passed into a sympl call. This would be similarly easy to do in metpy. One could also write a function to convert Dataset into one that contains unit-aware DataArray objects, or an open_dataset that calls xarray.open_dataset and then does such a conversion, though I'd wonder if certain Dataset calls (e.g. mean) might undo such a conversion.

In sympl our main concerns are unit checking at the boundary of components and in properly converting units when time stepping or adding outputs of components together. Maybe sympl should only be using this DataArray subclass internally, with type conversions or wrapping when taking DataArrays into and out of its methods? That would solve a lot of our problems.

unyt may be a better choice than pint for MetPy. Like I said, in Sympl we don't use pint for unit-information storage, only for conversion and arithmetic, so whether it uses an ndarray subclass doesn't matter for us.
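The decorator idea above can be sketched in plain Python. This is a hypothetical sketch, not sympl's API: `as_sympl_dataarray` and `coerce_dataarrays` are invented names, and a stand-in tagging function replaces the real xarray-to-sympl conversion (which would rebuild the array from data, coords, and attrs).

```python
import functools

def as_sympl_dataarray(obj):
    """Stand-in for converting an xarray.DataArray to a sympl.DataArray.

    Here we just tag the object; a real conversion would construct a
    sympl.DataArray from the xarray.DataArray's contents.
    """
    return ("sympl", obj)

def coerce_dataarrays(func):
    """Convert any argument that looks like a DataArray (here: anything
    with an `attrs` attribute) before calling the wrapped sympl function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [as_sympl_dataarray(a) if hasattr(a, "attrs") else a
                for a in args]
        kwargs = {k: (as_sympl_dataarray(v) if hasattr(v, "attrs") else v)
                  for k, v in kwargs.items()}
        return func(*args, **kwargs)
    return wrapper
```

Applied to every public sympl entry point, this would let users pass ordinary xarray objects while sympl works with its unit-aware subclass internally.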

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Advice on unit-aware arithmetic 325810810
391542922 https://github.com/pydata/xarray/issues/2176#issuecomment-391542922 https://api.github.com/repos/pydata/xarray/issues/2176 MDEyOklzc3VlQ29tbWVudDM5MTU0MjkyMg== mcgibbon 12307589 2018-05-24T00:10:29Z 2018-05-24T00:10:29Z CONTRIBUTOR

@shoyer that notation might work, thanks for pointing it out! Maybe we can think of a more natural name for the accessor ("with_units"? "keep_units"? "uarray"? "u"?). I find the "storage" of units as a string in attrs to be much cleaner than any other implementation I've seen so far (like implementations that have a unit container over an underlying array, or an array of unit-aware objects). It has the added benefit that this is how units are conventionally stored in netCDF files. I don't think it's necessary to use a class other than ndarray for data storage.

@kmpaul the main reason I stayed away from cf_units is that I had bad experiences trying to get it to build with its dependencies in the past. In particular, it depends on the Udunits C library, which requires MinGW to install on Windows and has generally been a headache for me. I'd much prefer a pure-Python unit handling implementation. For Sympl, we don't care so much about time units, because time is stored using datetime objects (potentially from the cftime package for alternate calendars). This is also the way time units are conventionally stored in xarray, once decoded.

It may make sense for us to use some kind of stand-alone unit-aware DataArray implementation. I'd just need to be convinced that yours is well-designed, thoroughly tested, and easy to install with pip. The main things concerning me about PhysArray are 1) As a container rather than subclass, it does not implement many of the methods of DataArray and 2) There are a few design choices I don't understand, like why calendar is always a property of a PhysArray even when it isn't storing a time, why cftime objects aren't used instead of units to manage time, and why the positive attribute is important enough for PhysArray to manage (I've never seen it in any data I've used, and it's easy to check if a DataArray is all positive or negative with a function call). We could discuss these by e-mail if you like (it's my username at uw.edu). Other possibilities are that I'll take the implementation we come up with and give it its own package, or that we'll collaborate on such a package.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Advice on unit-aware arithmetic 325810810
391443258 https://github.com/pydata/xarray/issues/2176#issuecomment-391443258 https://api.github.com/repos/pydata/xarray/issues/2176 MDEyOklzc3VlQ29tbWVudDM5MTQ0MzI1OA== mcgibbon 12307589 2018-05-23T18:03:27Z 2018-05-23T18:03:27Z CONTRIBUTOR

For reference, here are some of the sort of methods I've been adding that aren't currently in sympl:

import xarray as xr

# multiply_units, divide_units, and the .sympl accessor are sympl helpers.

def multiply(self, other):
    if isinstance(other, xr.DataArray):
        result = self._dataarray * other
        result.attrs['units'] = multiply_units(
            self._dataarray.attrs['units'], other.attrs['units'])
    else:
        result = self._dataarray * other
        result.attrs['units'] = self._dataarray.attrs['units']
    return result

def divide(self, other):
    if isinstance(other, xr.DataArray):
        result = self._dataarray / other
        result.attrs['units'] = divide_units(
            self._dataarray.attrs['units'], other.attrs['units'])
    else:
        result = self._dataarray / other
        result.attrs['units'] = self._dataarray.attrs['units']
    return result

def add(self, other):
    result = self._dataarray + other.sympl.to_units(self._dataarray.attrs['units'])
    result.attrs['units'] = self._dataarray.attrs['units']
    return result

def subtract(self, other):
    result = self._dataarray - other.sympl.to_units(self._dataarray.attrs['units'])
    result.attrs['units'] = self._dataarray.attrs['units']
    return result
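For context, a naive stand-in for the `multiply_units`/`divide_units` helpers used above might just manipulate CF-style unit strings, deferring real parsing and simplification to a unit library such as pint. This is an assumption-laden sketch, not sympl's actual implementation:

```python
def multiply_units(unit1, unit2):
    """Naive product of two unit strings, e.g. ('m', 's-1') -> 'm s-1'.

    No simplification is attempted; a real implementation would parse and
    canonicalize the result with a unit library.
    """
    return f"{unit1} {unit2}"

def divide_units(unit1, unit2):
    """Naive quotient of two unit strings, e.g. ('m', 's') -> 'm (s)-1'."""
    return f"{unit1} ({unit2})-1"
```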
{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Advice on unit-aware arithmetic 325810810
375719745 https://github.com/pydata/xarray/issues/2008#issuecomment-375719745 https://api.github.com/repos/pydata/xarray/issues/2008 MDEyOklzc3VlQ29tbWVudDM3NTcxOTc0NQ== mcgibbon 12307589 2018-03-23T16:18:35Z 2018-03-23T16:18:35Z CONTRIBUTOR

Thanks @spencerkclark !

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cannot save netcdf files with non-standard calendars 307857984
375500252 https://github.com/pydata/xarray/issues/2008#issuecomment-375500252 https://api.github.com/repos/pydata/xarray/issues/2008 MDEyOklzc3VlQ29tbWVudDM3NTUwMDI1Mg== mcgibbon 12307589 2018-03-23T00:24:39Z 2018-03-23T00:24:39Z CONTRIBUTOR

Great! I've had two people independently come to me with this same problem in the past three weeks, so it's good to see it's being worked on.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Cannot save netcdf files with non-standard calendars 307857984
300647473 https://github.com/pydata/xarray/issues/939#issuecomment-300647473 https://api.github.com/repos/pydata/xarray/issues/939 MDEyOklzc3VlQ29tbWVudDMwMDY0NzQ3Mw== mcgibbon 12307589 2017-05-11T00:16:34Z 2017-05-11T00:16:34Z CONTRIBUTOR

Having 13 arguments is considered poor software design in Java and other languages that lack optional arguments. The same isn't necessarily true of Python, though I haven't seen much discussion or writing on this.

I'd much rather have pandas.read_csv the way it is right now than to have a ReadOptions object that would need to contain exactly the same documentation and be just as hard to understand as read_csv. That object would serve only to separate the documentation of the settings for read_csv from the docstring for read_csv. If you really want to cut down on arguments, open_dataset should be separated into multiple functions. I wouldn't necessarily encourage these, but some possibilities are:

  • Have a function which takes in an undecoded dataset and returns a CF-decoded dataset, instead of a decode_cf kwarg
  • Have a function which takes in an unmasked/unscaled dataset and returns a masked/scaled dataset, instead of mask_and_scale
  • Have a function which takes in a dataset with undecoded times and returns a decoded dataset, instead of decode_times
  • similarly for decode_coords, chunks, and drop_variables. Should chunks and drop_variables even exist as kwargs, given that the functions to do these to a dataset already exist?

All of that aside, the DecoderOptions object already exists if that's what you want - it's the dict.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Consider how to deal with the proliferation of decoder options on open_dataset 169274464
300640372 https://github.com/pydata/xarray/issues/939#issuecomment-300640372 https://api.github.com/repos/pydata/xarray/issues/939 MDEyOklzc3VlQ29tbWVudDMwMDY0MDM3Mg== mcgibbon 12307589 2017-05-10T23:26:57Z 2017-05-10T23:26:57Z CONTRIBUTOR

I would disagree with the form open_dataset(filename, decode_options=kwargs) over open_dataset(filename, **kwargs), because the former breaks normal Python style. It would make the documentation for the arguments somewhat awkward ("decode_options is a dictionary which can have any of the following keys [...]"). It also forces the user to use a dictionary instead of having the option to use a dictionary or the regular style of entering kwargs.

What do you mean when you say it's easier to do error checking on field names and values? The xarray implementation can still use fields instead of a dictionary, with the user saying open_dataset(filename, **kwargs) if they feel like it. I think I'm not understanding something here.
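The error-checking point can be illustrated in plain Python: with explicit keyword-only parameters, callers can still pass a dict via `**kwargs`, and a typo fails loudly. This uses a minimal stand-in signature, not xarray's actual one:

```python
def open_dataset(filename, *, decode_cf=True, decode_times=True):
    """Minimal stand-in; the real open_dataset has many more options."""
    return {"filename": filename, "decode_cf": decode_cf,
            "decode_times": decode_times}

opts = {"decode_times": False}
ds = open_dataset("air.nc", **opts)   # dictionary style still works

try:
    open_dataset("air.nc", decode_time=False)  # typo in the option name
except TypeError as err:
    print(err)  # Python itself reports the unexpected keyword argument
```

So the field-name checking comes for free from normal keyword arguments; a `decode_options=dict` parameter would have to reimplement it by hand.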

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Consider how to deal with the proliferation of decoder options on open_dataset 169274464
256541189 https://github.com/pydata/xarray/issues/1062#issuecomment-256541189 https://api.github.com/repos/pydata/xarray/issues/1062 MDEyOklzc3VlQ29tbWVudDI1NjU0MTE4OQ== mcgibbon 12307589 2016-10-27T04:01:55Z 2016-10-27T04:01:55Z CONTRIBUTOR

I think raising a more informative error, particularly when year and month units are used, would be the right way to go. It would probably also be fine to require integer months/years, but pandas has weird behavior here too:

In [6]: pd.to_timedelta(1, 'M')
Out[6]: Timedelta('30 days 10:29:06')
In [7]: pd.to_timedelta(1.5, 'M')
Out[7]: Timedelta('30 days 10:29:06')

Because of this it would take a significant rework of decode_cf_datetime in conventions.py to actually implement integer months working properly.
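For reference, the fixed length pandas assigns to 'M' works out to one-twelfth of a 365.2425-day Gregorian calendar year; the arithmetic (plain Python, no pandas needed) reproduces the Timedelta shown above:

```python
# One pandas 'M' = 365.2425 / 12 days = 30.436875 days
days = 365.2425 / 12
whole_days = int(days)                              # 30
remainder_s = round((days - whole_days) * 86400)    # 37746 seconds
h, rem = divmod(remainder_s, 3600)
m, s = divmod(rem, 60)
print(f"{whole_days} days {h:02d}:{m:02d}:{s:02d}")  # 30 days 10:29:06
```

Which is exactly why February 1st + 1 'M' does not land on March 1st, and why calendar-aware month handling needs more than a fixed timedelta.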

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add remaining date units to conventions.py 185441216
256506227 https://github.com/pydata/xarray/issues/1062#issuecomment-256506227 https://api.github.com/repos/pydata/xarray/issues/1062 MDEyOklzc3VlQ29tbWVudDI1NjUwNjIyNw== mcgibbon 12307589 2016-10-26T23:27:43Z 2016-10-26T23:27:43Z CONTRIBUTOR

@jhamman It does sound sensible to have integer months accepted as a unit. However, Udunits isn't sensible (in this way), and CF conventions refer to Udunits. If we are to treat months as Udunits months, then each month is 30.42 or a similar number of days, and February 1st + 1 month is not the 1st of March.

The CF-compatible way to do it is have the length of a month be based on the length of a year for the current calendar. Even then it's not well defined since a common and leap year are different lengths...

At the very least, it would be helpful to raise an error message explaining why months and years aren't acceptable units when someone attempts to use them, possibly referring to this GitHub issue.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Add remaining date units to conventions.py 185441216
237664856 https://github.com/pydata/xarray/issues/939#issuecomment-237664856 https://api.github.com/repos/pydata/xarray/issues/939 MDEyOklzc3VlQ29tbWVudDIzNzY2NDg1Ng== mcgibbon 12307589 2016-08-04T19:55:10Z 2016-08-04T19:55:10Z CONTRIBUTOR

We already have the dictionary. Users can make a decode_options dictionary, and then call what they want to with **decode_options.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Consider how to deal with the proliferation of decoder options on open_dataset 169274464
236727385 https://github.com/pydata/xarray/issues/929#issuecomment-236727385 https://api.github.com/repos/pydata/xarray/issues/929 MDEyOklzc3VlQ29tbWVudDIzNjcyNzM4NQ== mcgibbon 12307589 2016-08-01T22:28:48Z 2016-08-01T22:29:07Z CONTRIBUTOR

A simple way is to try creating a DataArray by assuming the list is metadata, and if that fails, assume it's data. However, this will probably "work" too often on lists it shouldn't. A better heuristic is that if the list is of length 2 or 3, the first element is an iterable of strings, the second element contains data (a list or numpy array), and the optional third element is a map/dictionary, then the list is probably metadata. You might also require that the second element isn't the same length as the first element when they're both lists (in case someone wants a 2x2 DataArray of string labels that they're constructing from a list of lists).
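That heuristic might be sketched as follows. `looks_like_metadata` is a hypothetical name, and for self-containedness this version only accepts lists/tuples for the data element, where a real one would also accept numpy arrays (and could add the same-length guard mentioned above):

```python
def looks_like_metadata(obj):
    """Guess whether a sequence is a (dims, data[, attrs]) spec
    rather than plain data."""
    if not isinstance(obj, (list, tuple)) or len(obj) not in (2, 3):
        return False
    dims, data = obj[0], obj[1]
    # First element: an iterable of dimension-name strings.
    if not (isinstance(dims, (list, tuple))
            and dims and all(isinstance(d, str) for d in dims)):
        return False
    # Second element: the data (a real version would accept ndarray too).
    if not isinstance(data, (list, tuple)):
        return False
    # Optional third element: an attrs mapping.
    if len(obj) == 3 and not isinstance(obj[2], dict):
        return False
    return True
```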

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Dataset creation requires tuple, list treated differently 168754274
226384689 https://github.com/pydata/xarray/pull/886#issuecomment-226384689 https://api.github.com/repos/pydata/xarray/issues/886 MDEyOklzc3VlQ29tbWVudDIyNjM4NDY4OQ== mcgibbon 12307589 2016-06-16T04:19:39Z 2016-06-16T04:20:32Z CONTRIBUTOR

Do you really want the docstring to say "scalar or list of scalars" and not "str or list of str"? Ah, though you want to be able to specify an index or a name... not sure how to word it.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Fix drop docstring 160554996
211630275 https://github.com/pydata/xarray/pull/829#issuecomment-211630275 https://api.github.com/repos/pydata/xarray/issues/829 MDEyOklzc3VlQ29tbWVudDIxMTYzMDI3NQ== mcgibbon 12307589 2016-04-18T23:31:26Z 2016-04-18T23:31:26Z CONTRIBUTOR

Appveyor build failed for some reason when trying to set up Miniconda on Windows 32-bit with Python 2.7. The 64-bit build of Python 3.4 passed.

Downloading Miniconda-3.7.3-Windows-x86.exe from http://repo.continuum.io/miniconda/Miniconda-3.7.3-Windows-x86.exe
File saved at C:\projects\xray\Miniconda-3.7.3-Windows-x86.exe
Installing C:\projects\xray\Miniconda-3.7.3-Windows-x86.exe to C:\Python27-conda32
C:\projects\xray\Miniconda-3.7.3-Windows-x86.exe /S /D=C:\Python27-conda32
Start-Process : This command cannot be run due to the error: The specified executable is not a valid application for this OS platform.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148903579
211541244 https://github.com/pydata/xarray/pull/829#issuecomment-211541244 https://api.github.com/repos/pydata/xarray/issues/829 MDEyOklzc3VlQ29tbWVudDIxMTU0MTI0NA== mcgibbon 12307589 2016-04-18T19:27:27Z 2016-04-18T19:27:27Z CONTRIBUTOR

@shoyer I've done the clean-ups you suggested, apart from for-looping tests for the reasons I mentioned in the line note. I hope my "what's new" additions are appropriate.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148903579
210955565 https://github.com/pydata/xarray/pull/829#issuecomment-210955565 https://api.github.com/repos/pydata/xarray/issues/829 MDEyOklzc3VlQ29tbWVudDIxMDk1NTU2NQ== mcgibbon 12307589 2016-04-17T04:45:53Z 2016-04-17T04:46:16Z CONTRIBUTOR

The idea is that if a dataset has an attribute, it is making a claim about that data. xarray can't guarantee that claims the attributes make about the data remain valid after operating on that data, so it shouldn't retain those attributes unless the user says it can.

I may have ceilometer data that tells me whether a cloud base is detected at any point in time, with an attribute saying that 0 means no cloud detected, another attribute saying that 1 means cloud detected, and another saying that nan means some kind of error. If I resample that data using a mean or median, those attributes are no longer valid.

Or my Dataset may have an attribute saying that it was output by a certain instrument. If I save that Dataset after doing some analysis, it may give the impression to someone reading the netCDF that they're reading unprocessed instrument data, when they aren't.

Or I may want the hourly variance of a dataset, and do dataset.resample('1H', how='var'). In this case, the units are no longer valid.

It may seem like these are edge cases, but it's better to make no claims most of the time than to make bad claims some of the time.
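The semantics being argued for reduce to: a reduction returns empty attrs unless the caller opts in. A toy sketch of that contract (invented names, not the xarray implementation):

```python
def reduce_with_attrs(values, attrs, how, keep_attrs=False):
    """Apply a reduction; drop attrs by default, since the reduction
    may invalidate whatever claims the attrs make about the data."""
    return how(values), (dict(attrs) if keep_attrs else {})

mean = lambda xs: sum(xs) / len(xs)

# Ceilometer example from above: flag meanings don't survive a mean.
value, attrs = reduce_with_attrs([0, 1, 1], {"flag_1": "cloud detected"}, mean)
# attrs is {} here; only keep_attrs=True asserts the attributes still apply
```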

{
    "total_count": 2,
    "+1": 2,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148903579
210925528 https://github.com/pydata/xarray/issues/828#issuecomment-210925528 https://api.github.com/repos/pydata/xarray/issues/828 MDEyOklzc3VlQ29tbWVudDIxMDkyNTUyOA== mcgibbon 12307589 2016-04-16T23:51:02Z 2016-04-16T23:51:02Z CONTRIBUTOR

@shoyer I've corrected this in a PR I'll submit shortly.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  Attributes are currently kept when arrays are resampled, and not when datasets are resampled 148902850
210923242 https://github.com/pydata/xarray/issues/825#issuecomment-210923242 https://api.github.com/repos/pydata/xarray/issues/825 MDEyOklzc3VlQ29tbWVudDIxMDkyMzI0Mg== mcgibbon 12307589 2016-04-16T23:29:06Z 2016-04-16T23:29:06Z CONTRIBUTOR

It turns out the bug was on line 323 of groupby.py: _concat_shortcut silently copies the metadata of the array doing the concatenation onto the result. I've removed that line and now the tests are passing.

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148765426
210711180 https://github.com/pydata/xarray/issues/825#issuecomment-210711180 https://api.github.com/repos/pydata/xarray/issues/825 MDEyOklzc3VlQ29tbWVudDIxMDcxMTE4MA== mcgibbon 12307589 2016-04-16T01:58:53Z 2016-04-16T02:00:28Z CONTRIBUTOR

It turns out that, in addition, first and last in ops don't accept keep_attrs as a keyword argument, so right now they always preserve attributes. A side effect is that the keep_attrs arguments passed around by _first_and_last and friends in groupby don't actually do anything (though their default value, True, reflects what happens).

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148765426
210701860 https://github.com/pydata/xarray/issues/825#issuecomment-210701860 https://api.github.com/repos/pydata/xarray/issues/825 MDEyOklzc3VlQ29tbWVudDIxMDcwMTg2MA== mcgibbon 12307589 2016-04-16T01:06:27Z 2016-04-16T01:06:27Z CONTRIBUTOR

@shoyer the default keep_attrs isn't the problem here, the issue is that there is currently no keep_attrs option at all for resampling.

I've implemented a solution, but now test TestDataset.test_resample_and_first is failing. This is because for how="first" and how="last", attributes are currently kept (keep_attrs=True). This may break some code if resample is given a default of keep_attrs=False. Using a default of keep_attrs=True for how in ('first', 'last') results in the test passing.

Alternatively I could make it so the default behavior is to not pass any keep_attrs value on to the grouper function, which would keep the current defaults of those groupers. The code would be a bit uglier but it's not hard, and it would prevent breaking scripts. What do we want for the default behavior?

{
    "total_count": 0,
    "+1": 0,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148765426
210695712 https://github.com/pydata/xarray/issues/825#issuecomment-210695712 https://api.github.com/repos/pydata/xarray/issues/825 MDEyOklzc3VlQ29tbWVudDIxMDY5NTcxMg== mcgibbon 12307589 2016-04-16T00:21:44Z 2016-04-16T00:21:44Z CONTRIBUTOR

@pwolfram I use xarray within a wrapper for my own work, and have already written this transfer-attributes functionality into that for my short-term solution. But it makes sense to have the same keep_attrs flag that many other xarray functions have.

@jhamman I'll try to put the PR together.

{
    "total_count": 1,
    "+1": 1,
    "-1": 0,
    "laugh": 0,
    "hooray": 0,
    "confused": 0,
    "heart": 0,
    "rocket": 0,
    "eyes": 0
}
  keep_attrs for Dataset.resample and DataArray.resample 148765426

CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue]
    ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user]
    ON [issue_comments] ([user]);
Powered by Datasette · About: xarray-datasette