issue_comments
4 rows where issue = 873842812 and user = 13301940 sorted by updated_at descending
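The query behind this view would look roughly like the sketch below; it assumes the issue_comments schema shown at the bottom of the page and is not necessarily the exact SQL the page generates:

select *
from issue_comments          -- table defined in the schema below
where issue = 873842812      -- filter from the page description
  and user = 13301940
order by updated_at desc;    -- "sorted by updated_at descending"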
id | html_url | issue_url | node_id | user | created_at | updated_at | author_association | body | reactions | performed_via_github_app | issue
---|---|---|---|---|---|---|---|---|---|---|---
833482609 | https://github.com/pydata/xarray/pull/5244#issuecomment-833482609 | https://api.github.com/repos/pydata/xarray/issues/5244 | MDEyOklzc3VlQ29tbWVudDgzMzQ4MjYwOQ== | andersy005 13301940 | 2021-05-06T12:27:53Z | 2021-05-06T12:32:57Z | MEMBER | Here's the workflow visualization graph. Let me know if the current job dependency is okay... Also, someone with admin permissions on PyPI should make sure to get the necessary tokens from PyPI and TestPyPI and set them on this repo. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Add GitHub action for publishing artifacts to PyPI 873842812
831509902 | https://github.com/pydata/xarray/pull/5244#issuecomment-831509902 | https://api.github.com/repos/pydata/xarray/issues/5244 | MDEyOklzc3VlQ29tbWVudDgzMTUwOTkwMg== | andersy005 13301940 | 2021-05-03T20:21:51Z | 2021-05-03T20:21:51Z | MEMBER | I have a tendency to split a workflow into multiple jobs because it makes reasoning about the workflow easier (at least for me :)). However, I think using a single job here would reduce overhead, since the logic isn't complex enough to warrant multiple jobs... | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Add GitHub action for publishing artifacts to PyPI 873842812
831408105 | https://github.com/pydata/xarray/pull/5244#issuecomment-831408105 | https://api.github.com/repos/pydata/xarray/issues/5244 | MDEyOklzc3VlQ29tbWVudDgzMTQwODEwNQ== | andersy005 13301940 | 2021-05-03T17:23:03Z | 2021-05-03T17:23:03Z | MEMBER | 👍🏽 for addressing these in separate PRs | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Add GitHub action for publishing artifacts to PyPI 873842812
830865392 | https://github.com/pydata/xarray/pull/5244#issuecomment-830865392 | https://api.github.com/repos/pydata/xarray/issues/5244 | MDEyOklzc3VlQ29tbWVudDgzMDg2NTM5Mg== | andersy005 13301940 | 2021-05-02T20:15:05Z | 2021-05-02T20:15:52Z | MEMBER | +1 for updating the how-to-release doc in another PR... I should point out that there are steps this action doesn't address, for instance steps 2 and 16. How should we address these steps as part of the semi-automated release? Cc'ing @pydata/xarray in case they want to chime in. | { "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 } | | Add GitHub action for publishing artifacts to PyPI 873842812
CREATE TABLE [issue_comments] (
   [html_url] TEXT,
   [issue_url] TEXT,
   [id] INTEGER PRIMARY KEY,
   [node_id] TEXT,
   [user] INTEGER REFERENCES [users]([id]),
   [created_at] TEXT,
   [updated_at] TEXT,
   [author_association] TEXT,
   [body] TEXT,
   [reactions] TEXT,
   [performed_via_github_app] TEXT,
   [issue] INTEGER REFERENCES [issues]([id])
);
CREATE INDEX [idx_issue_comments_issue] ON [issue_comments] ([issue]);
CREATE INDEX [idx_issue_comments_user] ON [issue_comments] ([user]);
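Since [user] and [issue] are foreign keys into the [users] and [issues] tables, a query along these lines can resolve them to readable values; the users.login and issues.title column names are assumptions about those related tables, which are not shown on this page:

select
  issue_comments.id,
  users.login,                 -- assumed column on the referenced [users] table
  issues.title,                -- assumed column on the referenced [issues] table
  issue_comments.created_at,
  issue_comments.body
from issue_comments
join users  on users.id  = issue_comments.user
join issues on issues.id = issue_comments.issue
where issue_comments.issue = 873842812
order by issue_comments.updated_at desc;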