Dataset schema: source (stringclasses, 1 value) · text (stringlengths, 152–659k) · filtering_features (stringlengths, 402–437) · source_other (stringlengths, 440–819k)
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add max retry count to run_transaction username_0: A transaction will now be retried at most 5 times before the original serialization failure is raised to the application. This prevents an infinite retry loop. fixes #98 <issue_comment>username_1: I feel like we should keep the current behavior as the default, not entirely sure though.
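A minimal sketch of the bounded-retry behavior described above. The exception class name and the `run_transaction` signature are illustrative assumptions, not the library's actual API:

```python
class SerializationFailure(Exception):
    """Stand-in for the driver's serialization-failure error (assumed name)."""

def run_transaction(txn_func, max_retries=5):
    # Retry up to max_retries times; once the budget is exhausted,
    # re-raise the original serialization failure to the application.
    for attempt in range(max_retries):
        try:
            return txn_func()
        except SerializationFailure:
            if attempt == max_retries - 1:
                raise
```

Capping the loop trades liveness for predictability: a contended transaction fails loudly after 5 attempts instead of spinning forever.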
{'fraction_non_alphanumeric': 0.03535353535353535, 'fraction_numerical': 0.012626262626262626, 'mean_word_length': 5.015151515151516, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17800589', 'n_tokens_mistral': 101, 'n_tokens_neox': 97, 'n_words': 58}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Don't error on re-exported imports in cycles. Fixes #4049. username_0: <issue_comment>username_0: So it turned out that just removing the blanket prohibition on re-exporting a node of kind `UNBOUND_IMPORTED` was all it took to make the example in #4049 (and a similar example I had independently discovered) work as expected. I had to reintroduce a specific check for a module importing from itself in order to make the `testImportFromSameModule` tests pass. Without the changes in `semanal.py`, the added test here fails (as expected) with:
```
Expected:
  main:2: error: Revealed type is 'def () -> m.one.One' (diff)
Actual:
  tmp/m/two.py:1: error: Module 'm' has no attribute 'One' (diff)
  main:2: error: Revealed type is 'def () -> m.one.One' (diff)

Alignment of first line difference:
  E: main:2: error: Revealed type is 'def () -> m.one.One'
  A: tmp/m/two.py:1: error: Module 'm' has no attribute 'One'
```
<issue_comment>username_0: Er, this isn't quite done yet; I hadn't asserted the resolved type at the right spot. On my other cyclic import example it works fully (all types resolve correctly), but on this example it leaves a type as `Any` where it shouldn't be; pushed a failing version of the test to demonstrate. I think that bogus `Any` is a better failure mode than "Module has no attribute" false positives, so I still consider this an improvement over the status quo, but I'm looking into whether we can visit import nodes again in pass 3 to resolve these types correctly. <issue_comment>username_0: Pushed my initial attempt to get the type corrected. Not working yet, will look into it more later. Any tips welcome! <issue_comment>username_1: I spent some time debugging this but could not figure it out. I'm eager to see what you come up with! <issue_comment>username_0: This passes all tests, including the added one, but in internal Instagram codebases it now generates new "{name} is not defined" errors on certain inline imports. Haven't found a simple-case repro for that yet, tracking it down. <issue_comment>username_0: Ok, this fixes the problem and causes no new errors (both in mypy test suite and in internal Instagram code). I think it's ready for an initial round of review. <issue_comment>username_0: Anything I could do by way of testing or anything else to make this more attractive for review? Not complaining, I know everyone's got plenty to do (been on the "overloaded maintainer" side plenty often), just wondering if I can help move it along somehow. <issue_comment>username_2: This is a "controversial" topic, and some refactoring of symbol table kinds will happen soon. Therefore this may wait for some time, sorry. <issue_comment>username_0: @username_2 Thanks for the explanation. This fixes a clear bug, the new behavior is unequivocally better, it's been thoroughly tested on a large real-world code base, and the PR is not complex; it doesn't change any major behaviors, just does some fixups in third pass on things that were missed in the second pass. It doesn't seem likely to add significant complexity to any upcoming refactor of symbol table kinds. Is there any strong reason to let it languish rather than just merging it? <issue_comment>username_3: @username_2 Can you give specific reasons why this PR should be postponed? If you are worried about type alias refactoring, it looks to me like the required change to this PR should be pretty minor and is not enough to block this from being merged before the type alias refactor.
@username_0 Fixing issues related to import cycles like #4049 is definitely high on our list of things we want to fix in the next few months or so. <issue_comment>username_0: Thanks for the review comments! I see that this fix is going to get uglier as I make it more complete. I'm curious what the longer-term plan is to address import cycles in a more thorough way (I guess some kind of fixed-point iteration in semantic analysis might be needed?). From the practical point of view of doing gradual typing in a large codebase, spurious `Any` is a preferable failure mode to false-positive import errors, so I'd have some preference for improving this and merging it (and I'm willing to work on getting it merge-ready). It seems like at the very least the tests will be a useful addition regardless of future refactoring. But if there are specific ways that merging this will hinder progress towards a better fix, I'm open to dropping it. <issue_comment>username_2: We discussed with @username_3 and it looks like this PR is unlikely to cause any long-term harm, and it would be OK as a temporary partial solution (provided you fix some review comments). <issue_comment>username_3: Here are a few ideas:

* Create "indirect cross-module references" (perhaps a new AST node class?) during the first pass of semantic analysis. These would be dereferenced on demand during later phases of semantic analysis. We would support multiple levels of indirection. For example, assume we have `from x import y` in module `m`. This would make `m.y` an indirect reference to `x.y`. `x.y` could be another indirect reference to, say, `a.b`. When analyzing a `y` reference in `m`, we'd follow the indirect reference chain until we reach `a.b`, which is the actual target.
* Create cross-module `ForwardRef` objects if there is a reference to a type in another module that hasn't been analyzed yet. We already support forward references within a module, and hopefully it wouldn't be hard to generalize these a bit.

<issue_comment>username_0: Ok, I think I've addressed all the feedback so far. I fixed several new cases (member expressions, use in type annotations) and added tests for type aliases, named tuples, and incremental mode. I left TODOs for a) inheriting from a re-exported-in-cycle import (it seemed too complex to fix up in third pass) and b) name expressions in scopes other than module-level variables. I certainly won't claim that this is a complete fix, but I think it does improve behavior incrementally for many common cases, and at the very least the tests and TODOs will be a useful starting point for a more complete fix. Review and further comments much appreciated. I'll be very curious to see what form the more complete fix takes, having wrestled with this for a while. I'm probably missing something key, but it feels to me like the mentioned approaches (cross-module references, using ForwardRef) are effectively just different ways to handle the book-keeping for unresolved cycles; they don't address the root problem with this diff: that we have to go back and fix up all kinds of things in third pass that normally should be handled in second pass, by effectively copying bits of code in from second pass. That doesn't feel like a path to a "complete fix", it feels like whack-a-mole, where the asymptote is that third pass becomes a more and more "complete" copy of second pass. It seems like for a thorough and sustainable fix of these kinds of issues, some kind of scheme for fixed-point iteration in second pass will be needed instead.
But I'm sure I'm missing something, and I look forward to learning a lot from seeing the better fix implemented :-) Thanks @username_3 and @username_2 for reviewing! <issue_comment>username_3: @username_0 Thanks for the updates! I'm currently prototyping what might become a cleaner fix, where we handle many re-exported imported names in semantic analyzer pass 2. If it works out, it might be at least a partial replacement of this PR, but your PR has been a useful first step towards fixing the underlying issue. I'm going to try your test cases with my work-in-progress fix. I still don't have a complete story for references to named tuples, type aliases and such with my approach. <issue_comment>username_3: @username_0 #4695 is an alternative fix to the same issue, which I believe to be somewhat more general. It handles base classes, at least, and it may be easier to generalize to fix certain other import-cycle-related issues. <issue_comment>username_4: This is superseded by https://github.com/python/mypy/commit/0b7597cebcfdbe038f02c1c123c164712ab47cd1, so closing. <issue_comment>username_2: Interesting that it didn't happen automatically. Looking at the previous time it worked, it looks like it auto-closes superseded PRs only on comments like `Fixes #PR` (which is weird). <issue_comment>username_4: Yeah, I think GitHub's syntax is a bit specific here. (I think closes may work, but not sure) <issue_comment>username_5: `close`, `fix`, and `resolve`, as well as variants like `closes`, all work: https://help.github.com/articles/closing-issues-using-keywords/
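As an aside, the "indirect cross-module references" idea username_3 sketches above amounts to chasing a chain of aliases until a concrete symbol is reached. A hypothetical illustration; the `IndirectRef` class and dict-based symbol table are inventions for this sketch, not mypy's actual data model:

```python
class IndirectRef:
    """Hypothetical marker for a name that re-exports another module's symbol."""
    def __init__(self, target):
        self.target = target  # fully qualified name, e.g. "x.y"

def resolve(symbol_table, name, max_depth=100):
    # Follow the chain m.y -> x.y -> a.b until a concrete symbol is
    # found; the depth cap guards against cyclic indirection.
    for _ in range(max_depth):
        node = symbol_table[name]
        if not isinstance(node, IndirectRef):
            return node
        name = node.target
    raise RuntimeError(f"too many levels of indirection at {name!r}")
```

Given `{'m.y': IndirectRef('x.y'), 'x.y': IndirectRef('a.b'), 'a.b': obj}`, `resolve(table, 'm.y')` returns `obj`.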
{'fraction_non_alphanumeric': 0.05192861255037421, 'fraction_numerical': 0.008635578583765112, 'mean_word_length': 4.643924626380767, 'pattern_counts': {'":': 0, '<': 22, '<?xml version=': 0, '>': 25, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23063204', 'n_tokens_mistral': 2243, 'n_tokens_neox': 2121, 'n_words': 1366}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: multiple events for same function username_0: In jQuery you can do:
```
map.on('load moveend', dosomething);
```
This does not work in Mapbox gl js. This seems to work:
```
map.on('load', dosomething);
map.on('moveend', dosomething);
```
Maybe support jQuery style? <issue_comment>username_1: I'd prefer not to support the jQuery style; this is a very minor bit of syntax sugar, and I'd bet it isn't an expectation for anyone but diehard jQuery fans. <issue_comment>username_0: @username_1 Ok, so
```
map.on('load', dosomething);
map.on('moveend', dosomething);
```
is how to do it then? <issue_comment>username_2: Yes, or with chaining:
```js
map
  .on('load', doStuff)
  .on('moveend', doStuff);
```
I'm also not in favor of supporting string sugar in events. We added it in Leaflet and it wasn't worth it.<issue_closed> <issue_comment>username_1: Agreed: no sugar here, chaining is the way to go for brevity. <issue_comment>username_0: Ok, thanks!
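For anyone who still wants the declined shorthand, it is a few lines of user code rather than a library feature. A hedged sketch, written here in Python terms; `emitter` is any object exposing an `.on(name, handler)` method, not the Mapbox GL JS API itself:

```python
def on_all(emitter, events, handler):
    # jQuery-style sugar: split a space-separated event string and
    # register the same handler for each event name.
    for name in events.split():
        emitter.on(name, handler)
    return emitter  # returned so calls can still be chained

# usage: on_all(my_map, "load moveend", do_something)
```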
{'fraction_non_alphanumeric': 0.11735700197238659, 'fraction_numerical': 0.006903353057199211, 'mean_word_length': 4.638888888888889, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20921345', 'n_tokens_mistral': 346, 'n_tokens_neox': 328, 'n_words': 128}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Production site is deployed from an unmerged feature branch username_0:
## Problem

Presumably as a result of the surge to launch in time for the [Members’ Assembly](http://iatistandard.org/en/about/join-iati/members-assembly/), the new [IATI website](http://iatistandard.org) is currently deployed from [an unmerged feature branch](https://github.com/IATI/preview-website/pull/4) (`add-page-models`) instead of [the `master` branch](https://github.com/IATI/preview-website/tree/master). Merging to `dev` and to `master` is blocked, because [linting checks fail](https://travis-ci.org/IATI/preview-website/builds/401705790).

## Proposed solution

I’ve sent the following pull requests to fix linting: #203, #204, #205 (these pull requests replace #55). Once all three are reviewed and merged, it should be fine to merge #4 (`add-page-models` -> `dev`) and subsequently #22 (`dev` -> `master`). <issue_comment>username_0: Fixed in #22!<issue_closed> <issue_comment>username_1: Good to flag this @username_0 - was just going to say this can be closed once https://github.com/IATI/deployment/pull/49 is merged and a deployment is run! <issue_comment>username_0:
## Problem

Presumably as a result of the surge to launch in time for the [Members’ Assembly](http://iatistandard.org/en/about/join-iati/members-assembly/), the new [IATI website](http://iatistandard.org) is currently deployed from [an unmerged feature branch](https://github.com/IATI/preview-website/pull/4) (`add-page-models`) instead of [the `master` branch](https://github.com/IATI/preview-website/tree/master). I guess this is okay, but it’s not really ideal. Merging to `dev` and to `master` is blocked, because [linting checks fail](https://travis-ci.org/IATI/preview-website/builds/401705790).

## Proposed solution

I’ve sent the following pull requests to fix linting: #208, ~~#205~~, #204, #203, #201, #199 (these pull requests replace #55). Once all of these are reviewed and merged, it should be fine to merge #4 (`add-page-models` -> `dev`) and subsequently #22 (`dev` -> `master`). <issue_comment>username_0: Hadn’t spotted that one! Totally agree. Reopening – I’ll leave it with you. <issue_comment>username_0: I guess this can now be closed?<issue_closed>
{'fraction_non_alphanumeric': 0.12778993435448577, 'fraction_numerical': 0.02975929978118162, 'mean_word_length': 4.891752577319588, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 13, 'https://': 7, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19425831', 'n_tokens_mistral': 765, 'n_tokens_neox': 699, 'n_words': 248}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add scripts folder to shellcheck username_0: Is `scripts/gen/windows-resources` still used? I didn't change it as it looks like it's not used anymore. <issue_comment>username_1: Yes, it's called by `go generate` in `cli/winresources/res_windows.go`, which is called from `scripts/build/windows`. <issue_comment>username_0: Isn't that commented out in the go file? <issue_comment>username_1: No, that's what `go:generate` lines should look like: https://blog.golang.org/generate <issue_comment>username_0: Ha! I learned something today :) <issue_comment>username_0: Hi folks. Any update on this PR? 🐱 <issue_comment>username_2: /cc @albers 👍
{'fraction_non_alphanumeric': 0.10518518518518519, 'fraction_numerical': 0.01037037037037037, 'mean_word_length': 6.268817204301075, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29085272', 'n_tokens_mistral': 214, 'n_tokens_neox': 205, 'n_words': 75}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: table handleResizeDoubleClick fix username_0: #### Fixes #3731

#### Checklist
- [ ] Includes tests
- [ ] Update documentation

<!-- DO NOT enable CircleCI for your fork. Our build will run when you open this PR. -->

#### Changes proposed in this pull request:
- fixes `ColumnHeader.handleResizeDoubleClick` when ColumnReordering and menuRenderer are enabled
- added `bp-table-text-no-measure` class to header's maybeRenderReorderHandle
- added `bp-table-text-no-measure` class to columnHeaderCell's maybeRenderDropdownMenu
- modified **packages/table/src/common/utils.ts** `measureTextContentWithExclusions` logic

#### Reviewers should focus on:
- I searched through the code; `measureTextContentWithExclusions` is used only in `handleResizeDoubleClick`, so I assumed it would be OK to modify it
- I modified `measureTextContentWithExclusions` because it is possible to have both ColumnReordering and menuRenderer icons inside a header
- I removed the DOM manipulations in `measureTextContentWithExclusions` and replaced them with a metrics computation
- added an `EXCLUDED_ICON_PLACEHOLDER_WIDTH` constant to add spacing to the auto-resize width in place of those icons
- `CLASSNAME_EXCLUDED_FROM_TEXT_MEASUREMENT` is exported and used in `header.tsx` and `columnHeaderCell.tsx`

<issue_comment>username_1: Thanks for your interest in [palantir/blueprint](https://github.com/palantir/blueprint), @username_0! Before we can accept your pull request, you need to sign our contributor license agreement - just visit https://cla.palantir.com/ and follow the instructions. Once you sign, I'll automatically update this pull request.
{'fraction_non_alphanumeric': 0.07650926479378362, 'fraction_numerical': 0.0041841004184100415, 'mean_word_length': 5.131868131868132, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25106350', 'n_tokens_mistral': 469, 'n_tokens_neox': 443, 'n_words': 174}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>username_0: 👷 Deploy Preview for *sleepy-meninsky-0c19cb* processing. 🔨 Explore the source changes: e1288126951560776ae8f5cf25a71157579c81e5 🔍 Inspect the deploy log: [https://app.netlify.com/sites/sleepy-meninsky-0c19cb/deploys/6188947709e8a70008241038](https://app.netlify.com/sites/sleepy-meninsky-0c19cb/deploys/6188947709e8a70008241038)
{'fraction_non_alphanumeric': 0.11021505376344086, 'fraction_numerical': 0.22849462365591397, 'mean_word_length': 7.288888888888889, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9348412', 'n_tokens_mistral': 211, 'n_tokens_neox': 159, 'n_words': 19}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: TypeError: Cannot read property 'inject' of undefined username_0: Here is my server.js, which is located in the root directory:
```
app.get('*', (req, res) => {
  if (!renderer) {
    return res.end('waiting for compilation... refresh in a moment.')
  }
  const s = Date.now()
  res.setHeader("Content-Type", "text/html")
  const errorHandler = err => {
    if (err && err.code === 404) {
      res.status(404).end('404 | Page Not Found')
    } else {
      // Render Error Page or Redirect
      res.status(500).end('500 | Internal Server Error')
      console.error(`error during render : ${req.url}`)
      console.error(err)
    }
  }
  const context = { url: req.url }
  const renderStream = renderer.renderToStream(context)
  const bodyOpt = { body: true }
  const { title, htmlAttrs, bodyAttrs, link, style, script, noscript, meta } = context.meta.inject()
  renderStream.once('data', () => {
    res.write(`
      <!doctype html>
      <html data-vue-meta-server-rendered ${htmlAttrs.text()}>
      <head>
        ${meta.text()}
        ${title.text()}
        ${link.text()}
        ${style.text()}
        ${script.text()}
        ${noscript.text()}
      </head>
      <body ${bodyAttrs.text()}>
    `)
  })
  renderStream.on('data', (chunk) => {
    res.write(chunk)
  })
  renderStream.on('end', () => {
    res.end(`
      <!--<script src="/assets/vendor.bundle.js"></script>
      <script src="/assets/client.bundle.js"></script>-->
      ${script.text(bodyOpt)}
      </body>
      </html>
    `)
  })
  renderStream.on('error', (error) => res.status(500).end(`<pre>${error.stack}</pre>`))
  renderer.renderToStream({ url: req.url })
    .on('error', errorHandler)
    .on('end', () => console.log(`whole request: ${Date.now() - s}ms`))
    .pipe(res)
})
```
<issue_comment>username_1: Stale. Please ping if the issue still persists<issue_closed>
{'fraction_non_alphanumeric': 0.16954694068192433, 'fraction_numerical': 0.009341429238673517, 'mean_word_length': 1.9262295081967213, 'pattern_counts': {'":': 0, '<': 18, '<?xml version=': 0, '>': 25, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27033416', 'n_tokens_mistral': 671, 'n_tokens_neox': 632, 'n_words': 137}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: shape: Querying shape size and position username_0: Request for support to query the position and size of an existing shape (and set them to new values?)

One use is to allow an existing image in a presentation to be characterized, so that it can be replaced with an image of the same size and resolution.

Shapes of the following types can appear in Slide.shapes (`<p:spTree>`) and each will require support:

- [x] sp -- Auto Shape
- [x] pic -- Picture
- [ ] grpSp -- Group Shape
- [ ] graphicFrame -- container for table and chart shapes
- [ ] cxnSp -- Connector (line)
- [ ] contentPart -- has no position, but should return None instead of raising an exception

<issue_comment>username_1: Has support for querying the width and height of a table been implemented? It would be useful to me because I want to position a table on my slide to be in the bottom-right corner. If I could know how wide/tall my table is after I fill it with values, I could back-work the appropriate values for "left" and "top" that would place it in the bottom-right corner. Thanks! For example, I create a table using:
```python
table = shapes.add_table(rows, cols, left, top, width, height).table
```
Then I loop through the cells and fill each with a different value, and the size of the table will shift to accommodate. I would like to then query the final width and height of my table, to ultimately reset the left and top attributes. <issue_comment>username_1: I am working with a table and would like to query its final height after it has been populated with values. This way I will be able to adjust its position on the slide (graphicframe.top) according to how large it is. I add the table with these lines:
```python
graphicframe = shapes.add_table(rows = 41, cols = 5, left = Inches(0), top = Inches(2), width = Inches(5.7), height = Inches(4))
table = graphicframe.table
```
And then loop through its cells to populate them individually. The table height changes to accommodate the input (for example, text may wrap onto 2 lines, increasing the height). graphicframe.height returns the original height which I inputted to add_table, and not the final height after data population. I found what looks to be the height of each row in the XML file (see image below), but even those correspond to the original height from add_table, not the empirical final height of the table after data population. [89209*40+89240 = Inches(4), my original height assigned to the table]

![image](https://user-images.githubusercontent.com/19732554/28181413-87361952-67d6-11e7-9b41-8bf20c05621e.png)

Do you know how I can find the empirical height of my table after it is populated? Thank you!! <issue_comment>username_2: @username_1 This is a closed issue and you're asking a new question. This isn't the place for it. Please ask support questions on Stack Overflow using the 'python-pptx' tag.
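For reference, the querying side this issue tracks is exposed as plain `left`/`top`/`width`/`height` properties on shapes in python-pptx. A short sketch using those real properties; the file name and the picture-repositioning logic are illustrative assumptions:

```python
from pptx import Presentation
from pptx.util import Inches
from pptx.enum.shapes import MSO_SHAPE_TYPE

prs = Presentation("deck.pptx")  # hypothetical file
for shape in prs.slides[0].shapes:
    # Positions and sizes are EMU-valued Length objects (or None for
    # shapes such as contentPart that carry no position).
    print(shape.shape_type, shape.left, shape.top, shape.width, shape.height)
    if shape.shape_type == MSO_SHAPE_TYPE.PICTURE:
        shape.left = Inches(1)  # the properties are writable as well
        shape.top = Inches(1)
```

Note that these values mirror what is stored in the slide XML; as the thread above observes, PowerPoint computes a table's rendered height at display time, so the post-population height of a table is not recoverable this way.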
{'fraction_non_alphanumeric': 0.06165569795635608, 'fraction_numerical': 0.023553862140630412, 'mean_word_length': 4.241379310344827, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29780379', 'n_tokens_mistral': 801, 'n_tokens_neox': 755, 'n_words': 443}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [EIP-0002] [DRAFT] Define a Wallet Exchange Format username_0: This proposal is about defining a cross-application format to exchange (import / export) wallets. The decisions made on EIP-0001 (#1) will impact this format.

**Why is it an issue ?** Because in order to ease the exchanges, we need to find:
- A more cross-compatible format than the [_bitkeys_ format](https://bitcointalk.org/index.php?topic=4448.0) (`wallet.dat` file).
- A short enough format to enable easy QR-code generation and readability.

<issue_comment>username_1: `Creating our own "character"-separated format to getting rid of the quotes and brackets.` Why not just use YAML? <issue_comment>username_0: Because YAML is only parsable when correctly indented (thus can't be easily stringified); that's why APIs still use JSON instead of YAML. <issue_comment>username_0: Upon reflection, to have a still-light but easy-to-handle solution, I think that the best format would be a pure JSON array:
```json
[
  5, // chains count, defaulted to 0 if it's not a HD wallet
  "masterNodePK", // defaulted to "" if it's not a HD wallet
  [ // List of non-HD addresses (randomly generated in the legacy wallet)
    "customAddress1PK",
    "customAddress2PK",
    ...,
  ]
]
```
The <issue_comment>username_0: Here is the clean proposal:

---

# [EIP-0002] Wallet Exchange Format

This RFC describes the Wallet Exchange Format, that is, the unique format used to exchange an Electra wallet with any application requiring full access to this wallet.

## Format

The format MUST be valid JSON following [RFC 8259](https://tools.ietf.org/html/rfc8259) specifications, including the obligation of being encoded in UTF-8 ([RFC-3629](https://tools.ietf.org/html/rfc3629)).

## Structure

The structure MUST be as described below, following this exact order:

```txt
[
  VERSION_INTEGER,
  CHAINS_COUNT_INTEGER,
  HIERARCHICAL_DETERMINISTIC_MASTER_NODE_PRIVATE_KEY_STRING,
  RANDOM_ADDRESSES_STRING_ARRAY
]
```

<issue_comment>username_2: I quickly scanned through this RFC, and while I agree that JSON format would be the best fit, I have some concerns about the implementation of this feature. What are we trying to solve here? Giving the possibility to someone or something else to have access to a user's wallet (regardless of how much money it contains) opens a number of issues WRT vulnerabilities. Can you please provide an example of what we're trying to achieve/solve? Are there any other coins doing something similar? I'd like to know more. <issue_comment>username_0: I don't see any other easy way of exchanging the wallet between a desktop app and a wallet app than via a QR code, but maybe I missed a possibility?<issue_closed> <issue_comment>username_1: @username_0 mind explaining why you closed this? <issue_comment>username_0: @username_1 Because I switched it to FINAL, so now it's on the master branch. https://github.com/Electra-project/Electra-Improvement-Proposals/blob/master/EIP-0002.md
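A minimal consumer of the proposed structure. This is a sketch against the four-field array defined above; the dictionary keys are my own naming, and it assumes a clean payload without the explanatory `//` comments (which are not legal JSON):

```python
import json

def parse_wallet(payload):
    """Parse an EIP-0002 wallet payload: a 4-element JSON array."""
    version, chains_count, master_node_pk, random_addresses = json.loads(payload)
    return {
        "version": version,
        "chains_count": chains_count,        # 0 when not an HD wallet
        "master_node_pk": master_node_pk,    # "" when not an HD wallet
        "random_addresses": list(random_addresses),
    }

# e.g. parse_wallet('[1, 5, "masterNodePK", ["customAddress1PK"]]')
```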
{'fraction_non_alphanumeric': 0.07409814754631135, 'fraction_numerical': 0.017549561260968474, 'mean_word_length': 4.225806451612903, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24757039', 'n_tokens_mistral': 959, 'n_tokens_neox': 860, 'n_words': 393}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Clean up history to remove failed merge of #1666 and #1682 username_0:
## Description

#1666 was merged in error, and subsequently #1682 reverted the commits of #1666. This severely polluted the commit history. There is only one additional commit on top of #1682, 0e91248e90c8b02f6bcade9bb239bba0de97c601, which can be cherry-picked after resetting the tip to 7c45aa3016afc3a3228b3c115ad3464bbca97091. This task is to force clean-up that history. <issue_comment>username_0: Done. The failed merges are removed as confirmed here:
```sh
$ git log --oneline
5355bd84 (HEAD -> master, origin/master) Use define + statis_assert instead of nested MAX macro usage.
7c45aa30 Fix few CI shellcheck errors.
0cb97eba Remove all pgp_hash_t leftovers.
2d8e5bf3 Use rnp::CRC24 instead of pgp_hash_t during armor/dearmor.
95d017db Replace std::vector<pgp_hash_t> with rnp::HashList
...
```
<issue_closed>
{'fraction_non_alphanumeric': 0.06581740976645435, 'fraction_numerical': 0.1040339702760085, 'mean_word_length': 4.968354430379747, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27567977', 'n_tokens_mistral': 374, 'n_tokens_neox': 314, 'n_words': 114}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Error running docker-compose up username_0: Hi, I'm trying to run `docker-compose up` but I'm getting this error:
```
Traceback (most recent call last):
  File "site-packages/urllib3/connectionpool.py", line 677, in urlopen
  File "site-packages/urllib3/connectionpool.py", line 392, in _make_request
  File "http/client.py", line 1252, in request
  File "http/client.py", line 1298, in _send_request
  File "http/client.py", line 1247, in endheaders
  File "http/client.py", line 1026, in _send_output
  File "http/client.py", line 966, in send
  File "site-packages/docker/transport/unixconn.py", line 43, in connect
FileNotFoundError: [Errno 2] No such file or directory
```
Any idea of what could be wrong? Thank you <issue_comment>username_0: Never mind.. I didn't have docker running :( So sorry...<issue_closed> <issue_comment>username_1: @username_0 Great to hear you were able to resolve the issue!
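The traceback bottoms out in docker's Unix-socket transport, which is why a missing daemon surfaces as `FileNotFoundError`. A quick pre-flight check one could run; the socket path is the conventional Linux default, an assumption rather than something this thread specifies:

```python
import os

DOCKER_SOCKET = "/var/run/docker.sock"  # conventional default on Linux

# The docker client connects through this socket; if the daemon is not
# running, the socket is absent and connect() raises FileNotFoundError.
if not os.path.exists(DOCKER_SOCKET):
    print("Docker daemon does not appear to be running; start it first.")
```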
{'fraction_non_alphanumeric': 0.10298661174047374, 'fraction_numerical': 0.035015447991761074, 'mean_word_length': 4.170212765957447, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7063589', 'n_tokens_mistral': 360, 'n_tokens_neox': 310, 'n_words': 110}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Removing references to yui.yahooapis.com username_0: Who: NOAA security scan "Additionally, linking to external, non-approved third party content delivery networks is not recommended" see: https://esgfdev/user/add/ <issue_comment>username_0: In file: cog/templates/cog/account/user_form.html the YUI content used to be loaded from: "http://yui.yahooapis.com/2.9.0" now it's loaded from the local copy of the library at: {{ STATIC_URL }}js/yui-2.9.0 <issue_comment>username_1: I see the code change.<issue_closed>
{'fraction_non_alphanumeric': 0.10270270270270271, 'fraction_numerical': 0.016216216216216217, 'mean_word_length': 5.1098901098901095, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '21005060', 'n_tokens_mistral': 186, 'n_tokens_neox': 176, 'n_words': 57}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: cec_test_t_set.R regression username_0: cec_test_t_set.R doesn't work
```{R}
Error in parse(text = lines, n = -1, srcfile = srcfile) :
  test_cec_t_set.R:53:3: unexpected symbol
52:   c <- CEC(x = Tset, params.mix = list(fixed_covariance_cluster_param, list(fixed_spherical_cluster_param, 3), control.nstart = 100, control.eps=0.09)
53:   plot
```
It was merged accidentally to the dev branch, sorry.<issue_closed>
{'fraction_non_alphanumeric': 0.12253829321663019, 'fraction_numerical': 0.0350109409190372, 'mean_word_length': 4.871794871794871, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7609517', 'n_tokens_mistral': 190, 'n_tokens_neox': 173, 'n_words': 40}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix misspelled file name username_0: # Please indicate you've done the following:

- [x] [Accepted the DCO](https://velero.io/docs/v1.5/code-standards/#dco-sign-off). Commits without the DCO will delay acceptance.
- [x] [Created a changelog file](https://velero.io/docs/v1.5/code-standards/#adding-a-changelog) or added `/kind changelog-not-required`.
- [ ] Updated the corresponding documentation in `site/content/docs/main`.

<issue_comment>username_0: Looking at why the other action is failing.. <issue_comment>username_0: I had the assignees file all malformed. Ran a linter on it and fixed all of it. I think the error now is legit: `Error: HttpError: Resource not accessible by integration` - because it's running against my fork? I think this should now work; the file is linter-happy and the file with the wrong extension name `yaml` (argh) has been fixed. <issue_comment>username_1: This check is still failing. https://github.com/vmware-tanzu/velero/pull/3712/checks?check_run_id=2396132398 with what seems like a permissions issue. <issue_comment>username_0: Yes. Did you see the message above yours? <issue_comment>username_1: I missed that, thanks for pointing that out. Should we expect the checks on this PR to pass? <issue_comment>username_0: I "think" that's gonna work. Let me try this specific scenario on my own repo (the assignees file with a wrong name in the upstream) and see 1) if I get this same error and 2) if merging it fixes things. <issue_comment>username_2: I *think* the `Auto Request Review` job is failing because of the exact problem this PR is meant to fix, since the PRs are run off the files in the `main` branch. I re-ran it and got the same result, but at the moment I'm not able to see specific logging in the Actions UI. <issue_comment>username_0: Yes, this is my thinking as well. I'd be comfortable merging this. If @username_1 wants to hang on until I run a test on my own repo, that's fine too. <issue_comment>username_1: I was eager to merge this and wanted to check if we need to wait for this check to pass. Merging
{'fraction_non_alphanumeric': 0.0743019403691434, 'fraction_numerical': 0.014671083767155703, 'mean_word_length': 4.462532299741602, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10398357', 'n_tokens_mistral': 628, 'n_tokens_neox': 606, 'n_words': 305}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: python3-CherryPy: update to 18.5.0. username_0: <issue_comment>username_0: I'm sorry it took me so long to answer. I was on vacation. Anyway, I've done all the proposed changes. <issue_comment>username_1: Please rebase so we'll see if they still build. While here, please set pycompile_module to `jaraco-X` (or remove it); `jaraco` is the default now and does not need to be set. <issue_comment>username_0: Done. I removed all `pycompile_module` variables
{'fraction_non_alphanumeric': 0.08488612836438923, 'fraction_numerical': 0.018633540372670808, 'mean_word_length': 5.1265822784810124, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26769571', 'n_tokens_mistral': 161, 'n_tokens_neox': 150, 'n_words': 63}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: TF Framework ops_eager_test missing test case add username_0: 1. disable_eager_execution API test case <issue_comment>username_0: @qlzh727 Please help review the changes. Thanks.... <issue_comment>username_1: @pragyaak Please help proceed with the next steps, as I have some access issues (I'm trying to resolve them). <issue_comment>username_2: @username_0 it seems one of the files has been moved or deleted: ` tensorflow/tensorflow/python/framework/ops_eager_test.py `; can you please check? <issue_comment>username_0: @username_2 This change is being closed; the updated PR #26379 can be found here.
{'fraction_non_alphanumeric': 0.06583072100313479, 'fraction_numerical': 0.025078369905956112, 'mean_word_length': 4.8090909090909095, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2565676', 'n_tokens_mistral': 201, 'n_tokens_neox': 185, 'n_words': 69}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Desktop: split stable/edge release notes username_0:
### Proposed changes

The pages for the release notes of Docker for Desktop are way too long, and the right-hand menu is oversized. Extract the edge releases from these main pages. Fixes #6103. <issue_comment>username_1: I see this, @username_0 ... I might need a few days to get to it. Just an FYI. <issue_comment>username_1: Hi, @username_0 ..... actually, if you can fix the conflicts (sorry about that), I can merge this Friday. Thanks for the thorough job here. <issue_comment>username_0: Hi @username_1! It was too painful to completely replay my changes on top of the change from @gtardif, so instead I added another commit at the end that re-establishes the new releases. Cheers!
{'fraction_non_alphanumeric': 0.06751592356687898, 'fraction_numerical': 0.014012738853503185, 'mean_word_length': 4.24, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20491485', 'n_tokens_mistral': 231, 'n_tokens_neox': 216, 'n_words': 116}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Simplify on show is kinda annoying username_0: Currently, if I write
```julia
julia> using SymbolicUtils, Test; @vars a b;

julia> a * b * a
((a ^ 2) * b)

julia> @test a * b * a == a^2 * b
Test Failed at REPL[33]:1
  Expression: a * b * a == a ^ 2 * b
   Evaluated: ((a ^ 2) * b) == ((a ^ 2) * b)
ERROR: There was an error during testing
```
Because we simplify the output of `Base.show`, failing testsets are tricky to debug. Notice it says `Evaluated: ((a ^ 2) * b) == ((a ^ 2) * b)`. One can use `showraw` to see what the actual representation is, but I think maybe we shouldn't be simplifying on `show`. <issue_comment>username_1: Closing this since the printing is unsimplified in tests now.<issue_closed> <issue_comment>username_0: @username_1 I feel that, especially since this package is trying to be generic and more agnostic about the users' use case, simplify in show is a bad idea. I think one major problem is that `simplify` makes interactively devising rules to manipulate a `Term` harder, because the object's output at the REPL is not actually what you're trying to manipulate.

____

PS: sorry I haven't had much time for this package lately. I'm hoping to get back into it soon! <issue_comment>username_1: Ok sure, feel free to make the change by just changing the default value of simplified_show[]
{'fraction_non_alphanumeric': 0.088, 'fraction_numerical': 0.01090909090909091, 'mean_word_length': 3.9855072463768115, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 9, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2816392', 'n_tokens_mistral': 443, 'n_tokens_neox': 412, 'n_words': 210}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: how to create dir and file using async api? username_0: <issue_comment>username_1: Linux has no async way to create a directory. You have to make the directory in a ThreadPoolExecutor.
```python
import asyncio
import os

async def mkdir_async(*args):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, os.mkdir, *args)
```
<issue_closed> <issue_comment>username_0: thanks for the clarification!
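A usage sketch for the helper above; the directory path is hypothetical, and both `asyncio.run` and `get_running_loop` require Python 3.7+:

```python
async def main():
    # Delegates os.mkdir to the default executor so the event loop
    # is never blocked by the filesystem call.
    await mkdir_async("/tmp/example_dir")  # hypothetical path

asyncio.run(main())
```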
{'fraction_non_alphanumeric': 0.08405172413793104, 'fraction_numerical': 0.00646551724137931, 'mean_word_length': 5.2, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4596971', 'n_tokens_mistral': 147, 'n_tokens_neox': 140, 'n_words': 49}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Settings - Alfresco username_0: Great day, I really enjoyed your work and it's already in testing. I have a doubt; I do not know Docker. There are some customizations that are performed on the following files:
- share-config-custom.xml
- alfresco-global.properties

How could I use docker-compose with custom settings? And what is the procedure for installing AMP and JAR packages? <issue_comment>username_1:
```docker
COPY assets/share-custom-config.xml ${CATALINA_HOME}/shared/classes/alfresco/
COPY assets/alfresco-global-properties $CATALINA_HOME/shared/classes/
```
For AMP and JAR, I recommend that you download the binaries on the fly at docker build:
```docker
ENV MD_PREVIEW_VERSION=1.3.0
ENV MANUAL_MANAGER_VERSION=1.0.0

WORKDIR /alfresco/tomcat/webapps/share/WEB-INF/lib/
RUN curl -L https://github.com/username_1/manual-manager/releases/download/v${MANUAL_MANAGER_VERSION}/loftux-manual-manager.jar \
    -o loftux-manual-manager.jar

WORKDIR /alfresco/tomcat/webapps/alfresco/WEB-INF/lib/
RUN curl -L https://github.com/username_1/manual-manager/releases/download/v${MANUAL_MANAGER_VERSION}/loftux-manual-manager.jar \
    -o loftux-manual-manager.jar

WORKDIR /alfresco/tomcat/webapps/share/WEB-INF/lib/
RUN curl -L https://github.com/username_1/manual-manager/releases/download/v${MANUAL_MANAGER_VERSION}/loftux-manual-manager.jar \
    -o loftux-manual-manager.jar

# Copy Markdown Preview Add-On.
# https://github.com/cetra3/md-preview
WORKDIR /alfresco/amps_share
RUN curl -L https://github.com/username_1/md-preview/releases/download/${MD_PREVIEW_VERSION}/parashift-mdpreview-share-${MD_PREVIEW_VERSION}.amp \
    -o parashift-mdpreview-share-${MD_PREVIEW_VERSION}.amp

WORKDIR /alfresco/amps/
RUN curl -L https://github.com/username_1/md-preview/releases/download/${MD_PREVIEW_VERSION}/parashift-mdpreview-repo-${MD_PREVIEW_VERSION}.amp \
    -o parashift-mdpreview-repo-${MD_PREVIEW_VERSION}.amp
```
But you can also use the `COPY` method if you put your binaries in the `assets` directory. <issue_comment>username_1: You might use the command `docker-compose up --build` to start your instance. <issue_comment>username_0: Great day username_1, I still cannot send email or enable smart folders; is there anything wrong that I'm doing? See the files below. [docker-compose.yml](https://pastebin.com/FtNC0PM) Folder: repository [docker-entrypoint.sh](https://pastebin.com/CJLtvjGp) Grateful, Enio <issue_comment>username_1: Hello Enio, I pushed an update a few days ago to support the "SMART_FOLDERS_ENABLED=true" variable. I recommend that you pull the latest docker images, update your docker-compose and restart the run.
```bash
docker pull username_1/alfresco:repository
docker pull username_1/alfresco:share
```
<issue_comment>username_1: I'm currently working on a way to add additional jar and amp files on the fly at startup <issue_comment>username_0: Something I'm doing wrong, but Alfresco is not sending email. Can you help? See the files below.
[docker-compose.yml](https://pastebin.com/FtNC0PMb) Folder: repository [docker-entrypoint.sh](https://pastebin.com/CJLtvjGp)
```yaml
search:
  environment:
    MAIL_HOST: smtp.gmail.com
    MAIL_PORT: "465"
    MAIL_PROTOCOL: smtp
    MAIL_ENCODING: UTF-8
    MAIL_FROM_DEFAULT_ENABLED: 'true'
    MAIL_FROM_DEFAULT: <EMAIL>
    MAIL_SMTP_USERNAME: <EMAIL>
    MAIL_SMTP_PASSWORD: ******
    MAIL_SMTP_AUTH: 'true'
    MAIL_SMTP_STARTTLS: 'true'
    MAIL_SMTPS_AUTH: 'true'
    MAIL_SMTPS_STARTTLS_ENABLE: 'true'
    MAIL_TESTMESSAGE_SEND: 'true'
    MAIL_TESTMESSAGE_TO: <EMAIL>
    MAIL_TESTMESSAGE_SUBJECT: Ref.:servidor Alfresco - ES Engenharia.
    MAIL_TESTMESSAGE_TEXT: O seu servidor Alfresco foi iniciado.
```
{'fraction_non_alphanumeric': 0.10113519091847266, 'fraction_numerical': 0.007223942208462332, 'mean_word_length': 3.9388535031847134, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 10, 'lorem ipsum': 0, 'www.': 0, 'xml': 2}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 6, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7769850', 'n_tokens_mistral': 1440, 'n_tokens_neox': 1328, 'n_words': 308}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: test/Makefile modification to build without docker-compose (on rhel8) username_0: The current instructions indicate to run `docker-compose`, but it may not be installed on the host (as I didn't have it on a rhel8 VM where I tried to build). There may also be different versions of `clang` and `llc` (I had clang and LLVM 11.0.0). With the proposed Makefile modifications, I am able to build `test_program_x86_64` on a RHEL 8.4.2 VM with `make KERNEL_HEADERS_ROOT=/usr/src/kernels/$(uname -r)`. It still works as before for the "standard case" with the `docker-compose` build.
```
$ make CC=clang LLC=llc OPT=opt LLVM_DIS=llvm-dis KERNEL_HEADERS_ROOT=/usr/src/kernels/$(uname -r)
CC: clang
KERNEL_HEADER_VERSION: 4.4.0-98-generic
KERNEL_HEADERS_ROOT: /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64
clang -target x86_64 -D__KERNEL__ -D__BPF_TRACING__ -Wunused -Wall -Werror -Wno-pointer-sign -Wno-address-of-packed-member -Wno-compare-distinct-pointer-types -Wno-gnu-variable-sized-type-not-at-end -Wno-macro-redefined -Wno-sometimes-uninitialized -Wno-tautological-compare -fno-stack-protector -Xclang -disable-llvm-passes -O2 -D__ASM_SYSREG_H -D__TARGET_ARCH_x86 -emit-llvm -c test_program.c -I src/ -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/arch/x86/include -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/arch/x86/include/uapi -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/arch/x86/include/generated -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/arch/x86/include/generated/uapi -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/include -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/include/uapi -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/include/generated -I /usr/src/kernels/4.18.0-305.19.1.el8_4.x86_64/include/generated/uapi -o - | \
	opt -O2 -mtriple=bpf-pc-linux | llvm-dis | \
	llc -march=bpf -filetype=obj -o test_program_x86_64
```
`docker-compose` build on a Mac:
```
$ docker-compose run --rm test-builder
Creating test_test-builder_run ... done
CC: clang-6.0
KERNEL_HEADER_VERSION: 4.4.0-98-generic
KERNEL_HEADERS_ROOT: /usr/src/linux-headers-4.4.0-98-generic
rm -rf test_program_*
clang-6.0 -target x86_64 -D__KERNEL__ -D__BPF_TRACING__ -Wunused -Wall -Werror -Wno-pointer-sign -Wno-address-of-packed-member -Wno-compare-distinct-pointer-types -Wno-gnu-variable-sized-type-not-at-end -Wno-macro-redefined -Wno-sometimes-uninitialized -Wno-tautological-compare -fno-stack-protector -Xclang -disable-llvm-passes -O2 -D__ASM_SYSREG_H -D__TARGET_ARCH_x86 -emit-llvm -c test_program.c -I src/ -I /usr/src/linux-headers-4.4.0-98-generic/arch/x86/include -I /usr/src/linux-headers-4.4.0-98-generic/arch/x86/include/uapi -I /usr/src/linux-headers-4.4.0-98-generic/arch/x86/include/generated -I /usr/src/linux-headers-4.4.0-98-generic/arch/x86/include/generated/uapi -I /usr/src/linux-headers-4.4.0-98-generic/include -I /usr/src/linux-headers-4.4.0-98-generic/include/uapi -I /usr/src/linux-headers-4.4.0-98-generic/include/generated -I /usr/src/linux-headers-4.4.0-98-generic/include/generated/uapi -o - | \
	opt-6.0 -O2 -mtriple=bpf-pc-linux | llvm-dis-6.0 | \
	llc-6.0 -march=bpf -filetype=obj -o test_program_x86_64
```
The `llvm-objdump` output seems reasonable to me: https://gist.github.com/username_0/6b4152ed15e01e21ef8b84ba0fade99e

Admittedly, I cannot yet run all the tests [1]; I don't know yet what causes the test failures.
[1] https://gist.github.com/username_0/6ec20abefe56788626129d9810015151 <issue_comment>username_0: @RafaelOrtizRC you are right, after changing `/etc/security/limits.conf` and rebooting to raise the `memlock` limits, all the tests worked!
```
#<domain>      <type>    <item>       <value>
*              -         memlock      unlimited
*
```
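As an alternative to editing `limits.conf` and rebooting, the same limit can be raised programmatically before loading BPF objects (given root or `CAP_SYS_RESOURCE`). A sketch using Python's standard `resource` module; this is a general technique, not something the thread itself used:

```python
import resource

# Lift the RLIMIT_MEMLOCK cap so BPF maps and programs can be locked
# into memory by the current process and its children.
resource.setrlimit(resource.RLIMIT_MEMLOCK,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))
```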
{'fraction_non_alphanumeric': 0.1487603305785124, 'fraction_numerical': 0.0823780325246601, 'mean_word_length': 3.6493184634448577, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25813334', 'n_tokens_mistral': 1704, 'n_tokens_neox': 1566, 'n_words': 281}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [DoctrineBridge] Cannot find entities inside Phar files username_0: If a Symfony 2 application is packaged inside a Phar file and Doctrine is configured to search for entities inside the Phar archive, the path will not be found.
```XML
<doctrine:mapping name="MyBundle" dir="Entity" prefix="My\Bundle\Entity" type="annotation" />
```
The problem is in the method `AbstractDoctrineExtension::setMappingDriverConfig` (src/Symfony/Bridge/Doctrine/DependencyInjection/AbstractDoctrineExtension.php):
```PHP
if (is_dir($mappingConfig['dir'])) {
    $this->drivers[$mappingConfig['type']][$mappingConfig['prefix']] = realpath($mappingConfig['dir']);
} else {
    throw new \InvalidArgumentException(sprintf('Invalid Doctrine mapping path given. Cannot load Doctrine mapping/bundle named "%s".', $mappingName));
}
```
`is_dir` will return `true`, while `realpath` will return `false`, causing the application to fail later on with the exception:
```
[Doctrine\Common\Persistence\Mapping\MappingException]
File mapping drivers must have a valid directory path, however the given path seems to be incorrect!
```
<issue_closed>
{'fraction_non_alphanumeric': 0.11365564037319763, 'fraction_numerical': 0.0016963528413910093, 'mean_word_length': 5.742857142857143, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20845451', 'n_tokens_mistral': 333, 'n_tokens_neox': 316, 'n_words': 112}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Apps.ListRepos/Apps.ListUserRepos: Add mercy-preview + nebula-preview headers username_0: **What type of PR is this?**
feature

**What this PR does / why we need it**:
* mercy-preview: For receiving repository topics
* nebula-preview: For visibility information

When calling `Apps.ListRepos` or `Apps.ListUserRepos`, the fields `visibility` and `topic` are always `nil`. The reason: missing headers for the developer preview API.

**Special notes for your reviewer**:
This applies the same fix as in https://github.com/google/go-github/pull/1780

**Does this PR introduce a user-facing change?**:
None

**Additional documentation e.g., usage docs, etc.**:
- [API changes to support internal repository visibility (2019-12-03)](https://developer.github.com/changes/2019-12-03-internal-visibility-changes/#repository-visibility-fields)

<issue_comment>username_1: Never mind my last comment. :joy: See edit above. <issue_comment>username_0: @username_1 @wesleimp Good to merge? Anything I can do to support? <issue_comment>username_1: Thank you, @wesleimp ! Merging.
{'fraction_non_alphanumeric': 0.10913930789707187, 'fraction_numerical': 0.022182786157941437, 'mean_word_length': 4.611940298507463, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '965615', 'n_tokens_mistral': 372, 'n_tokens_neox': 326, 'n_words': 114}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Different groups of the same topic name in one instance will cause routing error username_0: How to reproduce:
```cs
[CapSubscribe("key")]
public void method(){ }

[CapSubscribe("key", Group="group2")]
public void method2(){ }
```
`method` and `method2` are in the same instance. If the publisher publishes the topic with the same key more than once, routing to the subscribed methods on the consumer side may go wrong.<issue_closed> <issue_comment>username_0: Fixed in v2.4.0
{'fraction_non_alphanumeric': 0.09550561797752809, 'fraction_numerical': 0.0149812734082397, 'mean_word_length': 4.631578947368421, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17761379', 'n_tokens_mistral': 165, 'n_tokens_neox': 149, 'n_words': 66}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix verbiage on environment validation error username_0:
### Describe the Bug

The [current verbiage](https://github.com/appcelerator/vscode-appcelerator-titanium/blob/71d51fbc270c41a39e80f68e4db6f5f3627de984/src/extension.ts#L371) in the error dialog on environment validation does not make sense; it should be updated to `You are missing the following required components for Titanium development:`

### Steps to reproduce the Bug

Steps to reproduce the behavior, e.g.:
1. Uninstall the appcelerator npm package: `npm uninstall appcelerator -g`
2. Open the extension

### Expected Behavior

The error dialog should have good verbiage that makes sense

### Your Environment<issue_closed>
{'fraction_non_alphanumeric': 0.06995884773662552, 'fraction_numerical': 0.04252400548696845, 'mean_word_length': 5.517857142857143, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10849911', 'n_tokens_mistral': 234, 'n_tokens_neox': 213, 'n_words': 79}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Reduce visibility of DirectRunner classes username_0: Be sure to do all of the following to help us incorporate your contribution quickly and easily: - [ ] Make sure the PR title is formatted like: `[BEAM-<Jira issue #>] Description of pull request` - [ ] Make sure tests pass via `mvn clean verify`. - [ ] Replace `<Jira issue #>` in the title with the actual Jira issue number, if there is one. - [ ] If this contribution is large, please file an Apache [Individual Contributor License Agreement](https://www.apache.org/licenses/icla.pdf). --- Move inner classes of the DirectRunner to reduce total API Surface. <issue_comment>username_0: R: @username_1 @dhalperi This is the easy part of #2714 <issue_comment>username_0: retest this please <issue_comment>username_1: LGTM, and Jenkins has already processed all affected modules.
{'fraction_non_alphanumeric': 0.07845303867403315, 'fraction_numerical': 0.009944751381215469, 'mean_word_length': 3.8972972972972975, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13531283', 'n_tokens_mistral': 266, 'n_tokens_neox': 246, 'n_words': 111}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix path for httpd binary username_0: In Habitat 0.9.3, the plan build script no longer prepends `$pkg_prefix` to the command specified by `pkg_svc_run`, because as long as the "bin" directory is specified in `pkg_bin_dirs`, the program for the service will be in the `$PATH` environment variable. This means we also do not need to specify it in the `$pkg_svc_run` variable in our plans. And in fact, the `$pkg_prefix` variable itself isn't populated at the time that a plan is sourced by the build script, so this would be an empty value and result in the `run` script getting rendered as, e.g., `/bin/httpd`. <issue_comment>username_1: @username_0, thanks for your PR! By analyzing the annotation information on this pull request, we identified @i-agrawal, @juliandunn, @stevendanna, @fnichol and @smith to be potential reviewers
{'fraction_non_alphanumeric': 0.06971428571428571, 'fraction_numerical': 0.006857142857142857, 'mean_word_length': 5.083333333333333, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4766686', 'n_tokens_mistral': 262, 'n_tokens_neox': 253, 'n_words': 133}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Your instructions for Windows is failing on windows 7 username_0: Your instructions for Windows are failing on Windows 7:
```
PS C:\Users\Fire> . { iwr -useb https://omnitruck.chef.io/install.ps1 } | iex; install -channel current -project chefd
The term 'iwr' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:8
+ . { iwr <<<< -useb https://omnitruck.chef.io/install.ps1 } | iex; install -channel current -project chefdk
    + CategoryInfo          : ObjectNotFound: (iwr:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException
install: unknown option -- h
Try '/usr/bin/install --help' for more information.
PS C:\Users\Fire>
```
<issue_comment>username_1: I think that Invoke-WebRequest (`iwr`) was introduced in PowerShell 3, and Windows 7 has PowerShell 2 by default. Maybe README.md needs a note about the version requirement in the Pre-Release installation section. I upgraded PowerShell (to version 4, iirc) on a Windows 7 machine and the command runs fine. <issue_comment>username_2: I can't seem to run down a piece of documentation regarding this, but PowerShell >= 3 is required. We should definitely have this documented somewhere (more obviously).<issue_closed> <issue_comment>username_3: I'm going to close this out at this point since all platforms we support have PS3
{'fraction_non_alphanumeric': 0.07477243172951886, 'fraction_numerical': 0.011053315994798439, 'mean_word_length': 4.013029315960912, 'pattern_counts': {'":': 0, '<': 10, '<?xml version=': 0, '>': 9, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14990246', 'n_tokens_mistral': 450, 'n_tokens_neox': 416, 'n_words': 196}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix typo in instructions username_0: In the ENS Dapp for the permanent registrar, in the help section ''learn how to manage your name'', it says: 3. Set Reverse Record. Once you set your address, you will see a message saying the reverse pointer is not set. Click the arrow in the field to expand, then click the button ''point''. This will translate your address into your new name. I think ''click the button point'' is wrong and confusing, because there is no button labeled ''point''; it should say click the button ''SAVE''. Good luck!<issue_closed>
{'fraction_non_alphanumeric': 0.055900621118012424, 'fraction_numerical': 0.015527950310559006, 'mean_word_length': 4.863636363636363, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '21121305', 'n_tokens_mistral': 180, 'n_tokens_neox': 170, 'n_words': 99}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Review and make an analysis about what is failing within "Filter by value" analysis form username_0: There are several bugs described in [this milestone](https://github.com/CartoDB/cartodb/milestone/142), but we should check whether the current implementation is valid and good for the future, or whether we should redo it. In my opinion, we should, at least, add those necessary tests we don't have right now. Apart from that, fix those ugly JS bugs we have in some scenarios. But in any case, some research and analysis about what should be changed is necessary. cc @username_2 @nobuti <issue_comment>username_1: There are also some features missing such as getting the TOP / BOTTOM X results based on a numeric column. <issue_comment>username_2: I added the missing tests for filter by value in https://github.com/CartoDB/cartodb/pull/12199, but as the implementation is likely to change, they would probably change too `¯\_(ツ)_/¯` <issue_comment>username_3: Stale issue. Closing 👋<issue_closed>
{'fraction_non_alphanumeric': 0.06340057636887608, 'fraction_numerical': 0.012487992315081652, 'mean_word_length': 4.853932584269663, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24014740', 'n_tokens_mistral': 290, 'n_tokens_neox': 273, 'n_words': 145}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: 🛑 Nino is down username_0: In [`9a05fe4`](https://github.com/erpcya/status/commit/9a05fe4d09ddc75005bb9e3423f9de11521fd5ec ), Nino ($URL_15) was **down**: - HTTP code: 0 - Response time: 0 ms <issue_comment>username_0: **Resolved:** Nino is back up in [`b66bb50`](https://github.com/erpcya/status/commit/b66bb500c759f9ec01aabc24b9ca0e7f99843387 ).<issue_closed>
{'fraction_non_alphanumeric': 0.14898989898989898, 'fraction_numerical': 0.15404040404040403, 'mean_word_length': 7.270833333333333, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10036275', 'n_tokens_mistral': 200, 'n_tokens_neox': 168, 'n_words': 28}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: cookie username_0: how to increase cookie size <issue_comment>username_1: This library doesn't restrict cookie size. The cookie size may be limited to 4K by the browser (see https://stackoverflow.com/questions/640938/what-is-the-maximum-size-of-a-web-browsers-cookies-key).<issue_closed>
{'fraction_non_alphanumeric': 0.10248447204968944, 'fraction_numerical': 0.027950310559006212, 'mean_word_length': 6.021739130434782, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '5229435', 'n_tokens_mistral': 105, 'n_tokens_neox': 97, 'n_words': 28}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: AttributeError: 'ResNet' object has no attribute 'isRefine' username_0: When I tried to run the demo code with the pretrained model, I got this error "AttributeError: 'ResNet' object has no attribute 'isRefine' ". I am a newbie and I don't know how to solve it. Is there anyone who knows how to solve this problem? Thanks! ![Screenshot from 2019-07-15 15-23-12](https://user-images.githubusercontent.com/30254494/61200782-28130e00-a715-11e9-8db2-5abdb30d210a.png) <issue_comment>username_1: We have fixed this error. And sorry for that. <issue_comment>username_2: I have the same problem and how to solve it @username_1 <issue_comment>username_2: I know how to solve thanks <issue_comment>username_3: same problem here. what is the answer, guys?<issue_closed>
{'fraction_non_alphanumeric': 0.08406524466750313, 'fraction_numerical': 0.07151819322459223, 'mean_word_length': 5.487804878048781, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24750946', 'n_tokens_mistral': 283, 'n_tokens_neox': 252, 'n_words': 97}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Check the `[]dependency` slice length to avoid panic username_0: ### What does this do / why do we need it? It checks whether the slice is nil, which we need because the code will panic without it. ### Do you need help or clarification on anything? No ### Which issue(s) does this PR fix? fixes #2194 <issue_comment>username_1: Dep was officially deprecated earlier this year, and the proposal to archive this repository [was accepted](https://github.com/golang/go/issues/38158). As such, I'm closing outstanding issues before archiving the repository. For any further comments, please use the proposal thread on the Go issue tracker. Thanks!
{'fraction_non_alphanumeric': 0.07320644216691069, 'fraction_numerical': 0.016105417276720352, 'mean_word_length': 4.60655737704918, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6864866', 'n_tokens_mistral': 188, 'n_tokens_neox': 173, 'n_words': 97}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: create some metrics for evaluating AutoML results username_0: I was thinking of 2 things: 1) Write out the `LoopImageSet.results` list at the end of each `collectLoopImages` operation. 2) Create small graphs showing `loopWidth` and `score` as a function of image `index` <issue_comment>username_0: If I want to add some functions to `loopDHS.py` e.g.
```
def write_results():
    """Writes out current contents of the jpeg list"""
    timestr = time.strftime("%Y%m%d-%H%M%S")
    fn = ''.join(['results_', timestr, '.txt'])
    results_file = os.path.join(context.state['jpeg_save_dir'], fn)
    with open(results_file, 'w') as f:
        for item in context.jpegs.results:
            f.write('%s\n' % item)
```
or
```
def plot_results():
    """Makes a simple plot of image index vs loopWidth"""
    timestr = time.strftime("%Y%m%d-%H%M%S")  # needed here too; the title and filename below use it
    i = [e[1] for e in context.jpegs.results]
    _logger.spam(f'PLOT INDICES: {i}')
    loopWidths = [e[7] for e in context.jpegs.results]
    _logger.spam(f'PLOT LOOP WIDTHS: {loopWidths}')
    plt.plot(i, loopWidths)
    plt.xlabel('image index')
    plt.ylabel('loop width')
    plt.title(' '.join(['loopWidth', timestr]))
    fn = ''.join(['plot_loop_widths_', timestr, '.png'])
    results_plot = os.path.join(context.state['jpeg_save_dir'], fn)
    plt.savefig(results_plot)
```
how do I make it so they have access to `context`?<issue_closed>
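For what it's worth, the usual answer to the question above (my own sketch, not the project's actual code) is to stop relying on a global and pass `context` in explicitly:

```python
# Hypothetical sketch: hand the context object to the helper instead of
# relying on a module-level global.
import os
import time

def write_results(context):
    """Write out the current contents of the jpeg results list."""
    timestr = time.strftime("%Y%m%d-%H%M%S")
    results_file = os.path.join(context.state['jpeg_save_dir'],
                                'results_' + timestr + '.txt')
    with open(results_file, 'w') as f:
        for item in context.jpegs.results:
            f.write('%s\n' % item)

# Callers that already hold a context simply pass it along:
# write_results(context)
```

The same applies to `plot_results(context)`; alternatively, both helpers could become methods of whatever class already owns `context`.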
{'fraction_non_alphanumeric': 0.14476458186929023, 'fraction_numerical': 0.004919184820801124, 'mean_word_length': 3.535031847133758, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9905880', 'n_tokens_mistral': 515, 'n_tokens_neox': 490, 'n_words': 136}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [bug]physical external interface mapping always get tagged interface setting username_0: Set the physical mapping as: `f5_external_physical_mappings = default:1.1:False,vlannet:1.2:False`. Found that the VLAN created for "vlannet" is always a tagged interface. This was found when testing OpenStack LBaaS v1. The issue is caused by the always-true logic in the code below. After checking the v2 code, it seems to have the same logic issue.
```python
def _assure_device_network_vlan(self, network, bigip, network_folder):
    # Ensure bigip has configured tagged vlan
    # VLAN names are limited to 64 characters including
    # the folder name, so we name them foolish things.
    vlan_name = ""

    interface = self.interface_mapping['default']
    tagged = self.tagging_mapping['default']

    # Do we have host specific mappings?
    net_key = network['provider:physical_network']
    if net_key and net_key + ':' + bigip.hostname in \
            self.interface_mapping:
        interface = self.interface_mapping[
            net_key + ':' + bigip.hostname]
        tagged = self.tagging_mapping[
            net_key + ':' + bigip.hostname]
    # Do we have a mapping for this network
    elif net_key and net_key in self.interface_mapping:
        interface = self.interface_mapping[net_key]
        tagged = self.tagging_mapping[net_key]

    if tagged:  # <<<<<<<<<<<!!!!!!!! always true
        vlanid = network['provider:segmentation_id']
    else:
        vlanid = 0
```
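If the "permanent true logic" means the mapping values are the raw strings parsed from `f5_external_physical_mappings` (my assumption; the report does not show the parsing code), the behavior is easy to reproduce in plain Python:

```python
# Hypothetical illustration: config values arrive as strings, and any
# non-empty string is truthy, so "False" still enters the tagged branch.
tagging_mapping = {'default': 'False', 'vlannet': 'False'}

tagged = tagging_mapping['vlannet']
print(bool(tagged))        # True, because bool("False") is True in Python

# One possible fix: normalize the flag once when the config is parsed.
def parse_bool(value):
    return str(value).strip().lower() in ('true', '1', 'yes')

print(parse_bool(tagged))  # False, as the mapping intended
```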
{'fraction_non_alphanumeric': 0.08045254556882464, 'fraction_numerical': 0.0069138906348208675, 'mean_word_length': 2.537777777777778, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25149663', 'n_tokens_mistral': 451, 'n_tokens_neox': 419, 'n_words': 138}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Allow to skip waiting for release after submit packages (with backwards compatibility). username_0: A new parameter "waiting" was added to both tasks:
```json
{
    "name": "waiting",
    "type": "boolean",
    "label": "Waiting to Release",
    "defaultValue": true,
    "required": true,
    "helpMarkDown": "It may take several hours."
}
```
It has a default value of "true", so the behavior is backwards compatible by default, but the user can uncheck this option to skip waiting for the release. <issue_comment>username_1: Hi, I am closing and re-opening this PR to bump the CLA bot. Sorry for the inconvenience! <issue_comment>username_2: @username_0 This functionality has been added in the extension now. You can check the 'Skip Polling' checkbox to avoid waiting for the submission to get published. Your efforts to enhance the extension are really appreciated! Keep the feedback coming! :-) <issue_comment>username_0: Thank you!
{'fraction_non_alphanumeric': 0.07584830339321358, 'fraction_numerical': 0.00499001996007984, 'mean_word_length': 3.3047210300429186, 'pattern_counts': {'":': 6, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10151691', 'n_tokens_mistral': 248, 'n_tokens_neox': 241, 'n_words': 120}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to use display names? username_0: Would you please help with how to use display names? <issue_comment>username_1: @username_2 I also have the same question, about using display names and context in suggestions. Kindly share some examples or the process through which we can implement that. <issue_comment>username_2: Acura ```<issue_closed>
{'fraction_non_alphanumeric': 0.06629834254143646, 'fraction_numerical': 0.011049723756906077, 'mean_word_length': 6.408163265306122, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '799675', 'n_tokens_mistral': 100, 'n_tokens_neox': 98, 'n_words': 45}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: service_unix - error text is not informative username_0: Hello. If a command, for example `start`, fails, only the error code is written to the output, for example: ` "systemctl" failed: exit status 5`. That is not informative. In my opinion, in the function [service_unix:55 func run](https://github.com/kardianos/service/blob/master/service_unix.go#L55), if you want to process the stderr buffer, it is better to use `bytes.Buffer` instead of `StderrPipe`:
```go
var buf bytes.Buffer
cmd.Stderr = &buf

[...]

if command == "launchctl" {
    if buf.Len() > 0 {
        return fmt.Errorf("%q failed with stderr: %s", command, buf.String())
    }
}

if err := cmd.Wait(); err != nil {
    return fmt.Errorf("%q failed: %s", command, buf.String())
}
```
The error will then include valid output from the service, for example for systemctl: `Failed to start name.service: Unit name.service not found.`
{'fraction_non_alphanumeric': 0.1328125, 'fraction_numerical': 0.0078125, 'mean_word_length': 4.215116279069767, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 3, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27737308', 'n_tokens_mistral': 291, 'n_tokens_neox': 282, 'n_words': 106}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Adding Activity and its Tests username_0: <issue_comment>username_1: Hey Anthony, not sure if you've already seen, but there are some compilation errors in Activity.java pertaining to the DateTimeFormatter. Not sure if it's compiling for you, but I think the Travis tests are complaining about it.
{'fraction_non_alphanumeric': 0.05089820359281437, 'fraction_numerical': 0.005988023952095809, 'mean_word_length': 5.836734693877551, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26752644', 'n_tokens_mistral': 89, 'n_tokens_neox': 80, 'n_words': 45}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Lighting Update Implementation username_0: I am trying to come up with a new implementation for MCEdit's lighting update that is fast enough to be processed immediately. I'll begin with a short summary of the in-game lighting algorithm, and follow by describing my past implementations.

# Algorithm

The behavior of light is simple enough to intuit: Blocks such as torches "glow" with their own light level. Block cells adjacent to a glowing block have a light level one less than that, and cells adjacent to those have a light level one less than that, and so on until the light level drops to zero. Solid blocks such as stone are completely opaque and drop the light level to zero immediately. Others such as ice and water are partially opaque and drop the light by more than one level.

Lighting must be updated whenever a block is changed, and we can consider only a few general cases:

The simplest case is when a block is changed into a light source, or when a light source is changed to a brighter source. First, set that cell's light value to the light source value. Then, compare that cell with the six cells adjacent. For each adjacent cell, subtract its opacity value (which is always at least 1) from the current cell's light, and then set the adjacent cell's light to the new value. If the adjacent cell's light value has changed, repeat the process with the adjacent cell as the new current cell. In pseudocode:

(For brevity, only the x coordinate is used and only the x-1 neighbor is checked. In real code, we need to check x-1, x+1, y-1, y+1, z-1, z+1)

```
def blockChanged(x):
    light = blockBrightness(x)
    setLight(x, light)
    spreadLight(x)

def spreadLight(x):
    light = getLight(x)
    if light <= 0:
        return

    # Repeat for all six adjacent cells
    adjacentLight = getLight(x-1)
    adjacentOpacity = blockOpacity(x-1)

    # If the adjacent cell already has the "correct" light value, stop;
    # otherwise update it and recurse.
    if light - adjacentOpacity > adjacentLight:
        setLight(x-1, light - adjacentOpacity)
        spreadLight(x-1)
    # End repeat
```

A variant of the first case is when a block is changed to another block with a lesser opacity value. For example, replacing ice with air. In this case, instead of replacing this cell's light value with the new block's light value (as it doesn't have one), we will recompute this cell's light value from its brightest neighbor. The cell's light is set to its brightest neighbor's light minus its own opacity, and then the light is updated as before:

```
def blockChanged(x):
    previousLight = getLight(x)
    light = blockBrightness(x)
    setLight(x, light)
    opacity = blockOpacity(x)

    # Repeat for all six adjacent cells
    adjacentLight = getLight(x-1)
    if adjacentLight - opacity > getLight(x):
        setLight(x, adjacentLight - opacity)
    # End repeat

    if previousLight != getLight(x):
        spreadLight(x)
```

The last case is when a block is changed to a dimmer light source. This also includes the cases where a block is changed to one with a greater opacity value. Both cases have the same result: the cell's light value is decreased. To handle this case correctly, we must first find every block that was previously illuminated by the cell's old light value and reset its light using its neighbors' light as in the variant of the first case. Then, each of those cells that was reset to a nonzero light will cast its light into adjacent cells using spreadLight(). Since all of the cells touched by the previous light source still have their old light values, we must do this in several passes.
First, find all cells touched by the previous light source, set their light values to zero, and store their positions. Second, set the light value of every cell found to the light value of its brightest neighbor minus its own opacity. Third, for every cell whose light changed in the second step, cast its light using spreadLight(). This search will find the cells adjacent to the changed block and has the effect of spreading the changed block's new light value.

This is the complete algorithm (note the comparisons: a brighter cell spreads its new light, a dimmer cell fades its previous light):

```
def blockChanged(x):
    previousLight = getLight(x)
    light = blockBrightness(x)
    setLight(x, light)
    drawLight(x)
    if previousLight < getLight(x):
        spreadLight(x)
    if previousLight > getLight(x):
        fadeLight(x, previousLight)
[Truncated]
    # End Repeat
    return fadedCells
```

# MCEdit 1.0 Implementation

MCEdit 1.0 used a completely different approach to updating the light values after blocks are changed. Instead of dealing with individual cells, the implementation updates light values across the entire chunk. First the entire chunk is reset to the brightness values of each block. Numpy's array slicing is used to get the entire chunk's light without its right edge, as `left`. Then, the entire chunk's opacity values without its left edge, as `right`. Then, `left - right` gives the new light values for all the blocks in `right`. This is repeated for all six directions, and for 15 iterations. Temporary arrays are used to hold the left and right edges of the adjacent chunks and detect whether the changed light has spilled into those chunks so they can be updated likewise. When a single block is changed, its chunk is marked for lighting updates, and since a light change has effects up to 15 blocks away in all directions, the eight adjacent chunks are also marked.

This approach has the obvious drawback of updating far more cells than is strictly necessary. However, it also has the benefit of not dealing with block coordinates at all. By doing this, it also sidesteps the inefficiency of checking a single cell for updates from multiple directions. (Imagine the light from one block spreading up and then left, and that same light also spreading left and then up to touch the other block twice.)

I haven't bothered to estimate it, but for large numbers of simultaneous block changes, there has to be a break-even point at which the chunk-by-chunk method is more time-efficient (and space-efficient!) than the block-by-block method. To give you an idea of where it is, placing a single glowstone block in the middle of a large enough dark cavern will change the light values of ~15,000 cells; a Minecraft Alpha chunk (at the time the algorithm was written) has 32,768 cells.

# Array-based setBlocks implementation

MCEdit 2.0 has a partial, and very broken implementation that builds upon the new `setBlocks` function, which allows any number of blocks (and light values, biomes, etc) to be changed in a single call. It is roughly equivalent to the first block-by-block implementation, except that the individual x, y, z coordinates are arrays of coordinates instead. Even though it is incomplete - it only implements the first case of "block light increased" - it already reveals some shortcomings. The biggest one being that there is no elimination of duplicate coordinates.

(going to bed - will finish later)

<issue_comment>username_1: It's a bit hard to grasp it all after first reading through it and to know if I'm completely in the dark with my ideas, but here goes. So to me it seems that increasing the light value, i.e.
spreading new light, is much easier and more straightforward? And the removing or dimming of light sources is the more complicated case? Based on that, my first thought (which I'm not sure if it is already what is happening) is something like this: When a light source is removed (or dimmed), first decrease the light value of all the blocks in a diamond-ish shape around it (or just a box?), by the amount that would possibly have been received from that light source if completely unobstructed (i.e. air; so basically decrease the light value by the removed source's light value minus the block distance from it, or even straight to zero?). Then re-spread the light (only needs to be done "inwards" towards the area?) from all the light sources that are in a 15-ish block range from the edges of that dimmed diamond-shaped area (or box) that was affected. Not sure if this makes any sense or would be cheaper than the existing algorithms. My thinking was that it might be less complex than trying to find out the blocks that actually got affected: just reset everything to zero in the range possibly affected and re-spread from the surroundings and the affected area's remaining light sources. <issue_comment>username_2: I've thought about light propagation before, and since you want ideas no matter how far-fetched, I'll share what I've thought about that seems relatively OK. You might be able to use something. (This will be pseudocode in a C#/Java like style since that's what I'm used to.) I always saw Minecraft light propagation as a recursive problem (which is strange, given my opinion on recursion -- that it's rarely applicable). In essence, each block would be responsible for notifying its neighbors when its light propagation changed. Thus, when a block's light value is changed by the player:

```
void setLightLevel(int value) {
    if (value == this.lightLevel) return;
    //else
    bool increased = value > this.lightLevel;
    this.lightLevel = value;
    foreach (BlockPos pos in this.pos.neighbors) {
        pos.block.setLightLevel(this.pos, this.lightLevel, increased);
    }
}
```

Thus, the actual block's change is quick to process and notifies the neighboring blocks that they need to update their light via setLightLevel(BlockPos, int, bool). This method is in charge of handling propagating light updates away from the source of the update and terminating them if no change occurs. Thus:

```
void setLightLevel(BlockPos source, int sourceValue, bool increasing) {
    if (increasing && sourceValue <= this.lightLevel) return;
    if (!increasing && sourceValue >= this.lightLevel) return;
    //else
    this.lightLevel = sourceValue - opacity;
    foreach (BlockPos pos in this.pos.neighbors) {
        if (pos != source) {
            pos.block.setLightLevel(this.pos, this.lightLevel, increasing);
        }
    }
}
```

Disclaimer: I've not done any testing on this algorithm, just speculation, and may be just rehashing things you've already considered. I just wanted to share my $0.02. In addition, I have no idea how this stands up to the large use case that MCEdit is concerned with, as when I came up with this I was only thinking about handling in-game cases (before /fill).

------------------------------------------------------------------------------------------------------------------------

From the more theoretical side, I think I agree that the light increase algorithm is by nature 'cheaper' than the light decrease. In the case of losing light sources it _might_ be more efficient to just black-out 15 blocks away from any modified blocks and just propagate the light back in.
Thinking shortly about it, it's worth considering an algorithm similar to this:
* Black out changed blocks + 15 blocks away (unless light source)
* Propagate light away from light sources within blacked out area
* Propagate light in from the edges of blacked out area

Lastly, I'm ignoring sunlight because that's done on the fly by the engine, right? <issue_comment>username_0: @username_1: That's a good point, but I don't see it as much of an improvement. You're asking if you can skip the opacity check when searching for blocks to dim, and finding all of the cells that the light would have touched if none of the blocks were opaque. In exchange for skipping the opacity check, you're now calling drawLight and spreadLight on a whole bunch of cells that aren't actually touched by the changed block's previous light. However, you did remind me of an important flaw in my findFadedCells function. I was setting all of the cells to zero, when what I should do is set the cells to the block's brightness value instead, so it can re-propagate those light sources (for instance, if that light source was washed out by a brighter light source, which is now being removed). I'll edit that function appropriately. <issue_comment>username_0: @username_2 Thanks for the input. It's simple enough to mentally revise your example with the condition that there isn't actually a `Block` class where every section has 4,096 instances of `Block`. MCEdit only operates on the numeric values in the `Blocks` and `Light` arrays. I'll assume that your `this.lightLevel = foo` is a no-op for blocks that emit their own light, so as not to make the same mistake I did just now. I agree, it seems like it's possible to dim the block's light in a single pass, but your dimming approach will halt when it reaches the edge of the new light value's "light diamond" and not go on to draw in light from the outer edges of the old light value's diamond, nor update any values outside of the new light diamond. This is not necessarily a recursive problem. It can be considered as iterations of a function whose input is a list of coordinates where updates will be applied (plus the world state), and whose output is a list of coordinates where updates should be applied next (plus the world state). The MCEdit 2.0 implementation is roughly like this, calling a function repeatedly until the coordinate list it returns is empty. Sunlight is handled by a simple pre-processing step that checks if the column's height changed and calls spreadLight or fadeLight as needed. Essentially, cells above the column's height are treated as light-emitting cells. <issue_comment>username_0: @username_2 to your edit: You seem to be suggesting the same thing @username_1 did, skipping the opacity check in exchange for updating several cells that do not even need it. Thinking of a common case of a light source placed against a wall indoors, I'd expect that skipping the opacity check would update almost twice as many cells as is needed, if not more. <issue_comment>username_0: I've implemented the base algorithm in pure python to use as a point of reference. I had to fix a mistake in `blockChanged` where the `<` and `>` were swapped, and changed `findFadedCells` to use a queue for the cells to check and a set for the cells already found, removing the recursive call, as it was spending far too much time concatenating lists with duplicate entries. These changes are reflected in the opening post. <issue_comment>username_0: I've implemented the base algorithm in Cython.
It uses a `RelightCtx` class to wrap the WorldEditor and provide fast, cached, C++-level access to chunk section arrays while still holding references to the WorldEditor's chunks to keep them alive. It also uses STL containers in place of Python collection objects wherever possible. It is at least 50 times faster than the pure python algorithm. I found that updateLights implicitly assumes that the cell's opacity has changed (as it doesn't have the cell's previous block ID), so I modified `copyBlocks()` with a couple of strategies for calling updateLights only on the cells whose opacity or brightness changed. The results and some further ponderings are in [`time_relight_manmade.py`](https://github.com/mcedit/mcedit2/blob/master/src/mceditlib/bench/time_relight_manmade.py) In summary, updating lights across the entire copied area is faster than updating lights for each section after copying that section, because later copies will redo some of the work done by earlier light updates. Thus, a strategy for only relighting copied chunks after all adjacent chunks are copied will be needed, along with a strategy for managing the memory used for the block coordinates that need updates. <issue_comment>username_0: Build 649 has a more-or-less complete lighting implementation. Light will update immediately after each edit, rather than all at once when the world is saved. It is not as fast as I want it to be, though. I can tell that there is a lot of duplicate and wasted work done while updating the skylight after the height of a column changes.
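Incidentally, the chunk-wide sweep from the MCEdit 1.0 section above is easy to express with NumPy. This is my own minimal sketch, not MCEdit code; it assumes same-shaped 3-D `brightness` and `opacity` arrays for a single chunk and ignores cross-chunk spill handling:

```python
import numpy as np

def relight_chunk(brightness, opacity, iterations=15):
    """Reset light to each block's own brightness, then repeatedly push
    light one cell along every axis, keeping the brighter value."""
    light = brightness.copy()
    for _ in range(iterations):
        for axis in range(3):
            for offset in (-1, 1):
                # Light shifted one cell along this axis, minus the
                # receiving cells' opacity, is what each cell would get
                # from that neighbor.
                incoming = np.roll(light, offset, axis=axis) - opacity
                # Zero the slice that np.roll wrapped around, so light
                # does not leak across opposite chunk faces.
                edge = [slice(None)] * 3
                edge[axis] = 0 if offset == 1 else -1
                incoming[tuple(edge)] = 0
                np.maximum(light, incoming, out=light)
    return light
```

Fifteen sweeps suffice because light never travels more than 15 cells, which is also why a single block change marks the eight neighboring chunks for updates.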
{'fraction_non_alphanumeric': 0.05238685678859269, 'fraction_numerical': 0.004215747055176689, 'mean_word_length': 3.833982619118969, 'pattern_counts': {'":': 0, '<': 14, '<?xml version=': 0, '>': 16, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22162236', 'n_tokens_mistral': 4019, 'n_tokens_neox': 3738, 'n_words': 2472}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add public data files that belong to projects username_0: <!--- Provide a general description of your issue in the Title above --> ## General Checkups - [x] Have you checked that there isn't already an existing issue that describes what you report below? - [x] Have you checked that there isn't already an open pull request for this issue/update/change? <!--- by the way: if you want to discuss feature requests and bugs with us before submitting anything on GitHub: We're always happy to chat on Slack: http://slackin.openhumans.org/ --> ## Description This comes out of a discussion with people running projects; @username_1 and I discussed it this morning (actually their idea, not mine, I just wanted to have it as an issue). People running projects on Open Humans sometimes want to provide some additional data files to all of their participants and other interested parties through Open Humans. These files are not associated with individual people, but rather are additional files that can be used to analyze the data sets that people contribute to the project. The solution we settled on: Project leads who run a project on Open Humans should be able to upload additional data files that will then be associated with their project. These files will be publicly available from the project homepage. Todo:

1. Add a file model that belongs to a given project
2. Views for the project lead to upload/delete these files
3. Add the files/download links to the project detail view

Open Questions: 1. do we want to integrate this into the API too? ## Related Issue(s) <!--- are there other already open issues that relate to this one and that interfere with this issue? In that case make sure to mention it like "#4" --> ## Example E.g. for my Twarxiv project I might want to provide a list of the "most common hashtags around the globe" or "background frequency of emoji use world-wide" so that people can use these data sets to further dive into their own Twitter archives. If I could upload this list to my Open Humans project I could easily link to these files and they would be visible to everyone checking out the project on our website. <!--- The description above should give a general example of bug you encountered or what feature you would like. This is the place to give a verbose example of what went wrong or what you would like to see improved and how the user-flow would be and how it improves things --> <issue_comment>username_1: Add a note: this feature could also be used by projects to share stuff like documentation of IRB approval, see: https://discourse.openhumans.org/t/approval-for-resilience-project-study/35
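As a starting point for item 1 of the todo list above, a minimal Django sketch could look like the following. Every name here is an assumption for illustration, not the actual Open Humans schema:

```python
from django.db import models

class ProjectPublicDataFile(models.Model):
    """Hypothetical model: a public file attached to a project,
    not to an individual member."""
    project = models.ForeignKey(
        'Project',                      # assumed name of the project model
        related_name='public_files',
        on_delete=models.CASCADE,
    )
    data_file = models.FileField(upload_to='project-public-files/')
    description = models.TextField(blank=True)
    created = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return '{}: {}'.format(self.project, self.data_file.name)
```

The upload/delete views (item 2) would then be ordinary CreateView/DeleteView subclasses restricted to the project lead, and the project detail view (item 3) would simply iterate `project.public_files`.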
{'fraction_non_alphanumeric': 0.04657132379904657, 'fraction_numerical': 0.003667033370003667, 'mean_word_length': 3.9330922242314648, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2980990', 'n_tokens_mistral': 676, 'n_tokens_neox': 633, 'n_words': 427}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: includes HR in the trace username_0: Hi, I added the HR value to the new exporter. <issue_comment>username_1: Hi Finally took some time to look into this. It seems that other apps are adding HR data to the GPX file in a different way, so I have reimplemented your commit following what I saw. Pull and test commit 4cdf648a2d383a8d82c83ae5e6957cecf927d4db! <issue_comment>username_0: Hi, The patch does not work for me. First, I have an error in the `round(1)` call, which does not seem to exist:
```
./polar_training2gpx:60:in `block (8 levels) in output_gpx': undefined method `round' for nil:NilClass (NoMethodError)
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `call'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `insert'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:375:in `method_missing'
from ./polar_training2gpx:59:in `block (7 levels) in output_gpx'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `call'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `insert'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:375:in `method_missing'
from ./polar_training2gpx:58:in `block (6 levels) in output_gpx'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `call'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `insert'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:375:in `method_missing'
from ./polar_training2gpx:55:in `block (5 levels) in output_gpx'
from ./polar_training2gpx:54:in `each'
from ./polar_training2gpx:54:in `block (4 levels) in output_gpx'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `call'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `insert'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:375:in `method_missing'
from ./polar_training2gpx:52:in `block (3 levels) in output_gpx'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `call'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `insert'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:375:in `method_missing'
from ./polar_training2gpx:51:in `block (2 levels) in output_gpx'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `call'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:391:in `insert'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:375:in `method_missing'
from ./polar_training2gpx:40:in `block in output_gpx'
from /home/leonardo/.rvm/gems/ruby-2.2.1/gems/nokogiri-1.8.0/lib/nokogiri/xml/builder.rb:293:in `initialize'
from ./polar_training2gpx:39:in `new'
from ./polar_training2gpx:39:in `output_gpx'
from ./polar_training2gpx:79:in `block in <main>'
from ./polar_training2gpx:78:in `open'
from ./polar_training2gpx:78:in `<main>'
```
If I remove the call to round(1), the GPX is created, but the program I use to read it (Turtlesport, it's open source) says it is invalid.
No problem with my version. <issue_comment>username_1: Pull & try again. The error you had is due to the fact that your device doesn't record temperature samples; it should be fixed now. I gave Turtlesport a quick try -- it has been able to import my last session successfully. <issue_comment>username_0: Still no luck. As expected, the round() error does not happen, but I still get an error from Turtlesport. What data do you want me to share to reproduce? <issue_comment>username_1: OK, zip a training session folder (everything in ~/Polar/<serial>/U/0/<yyyymmdd>/E/<id>) and share it somewhere so that I can take a look at what happens
{'fraction_non_alphanumeric': 0.1464563221097245, 'fraction_numerical': 0.05910054155874735, 'mean_word_length': 4.027218934911242, 'pattern_counts': {'":': 0, '<': 12, '<?xml version=': 0, '>': 12, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 19}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26434466', 'n_tokens_mistral': 1918, 'n_tokens_neox': 1765, 'n_words': 345}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Order of ConstantToken shouldn't matter username_0: ### Observed behavior: This parses successfully: ``` contract MyContract { uint constant internal a = 0; } ``` But this does not: ``` contract MyContract { uint internal constant a = 0; } ``` ### Expected Behavior Both should parse. Order of ConstantToken in DeclarativeExpression shouldn't matter.<issue_closed>
{'fraction_non_alphanumeric': 0.09557109557109557, 'fraction_numerical': 0.006993006993006993, 'mean_word_length': 4.0, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7142357', 'n_tokens_mistral': 135, 'n_tokens_neox': 120, 'n_words': 42}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Prompt for exposure logging not showing up and not in settings either. username_0: When I first downloaded the app, where it says you'll be asked to accept exposure logging, nothing showed up. When I go into the exposure logging settings, the Health privacy area says it is not available in your region, and I am in Ontario. <issue_comment>username_2: Adding this to another open issue.<issue_closed>
{'fraction_non_alphanumeric': 0.03836930455635491, 'fraction_numerical': 0.004796163069544364, 'mean_word_length': 5.147058823529412, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '12045803', 'n_tokens_mistral': 100, 'n_tokens_neox': 98, 'n_words': 63}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Use partially applied function syntax to avoid wrapping by-name thunks. username_0: Given a by-name block named "thunk", the syntax (() => thunk) wraps the thunk in an extra Function0 object. However, by-name thunks are already Function0 instances, so a micro-optimization can be done by using the syntax thunk _ This tells the compiler that the by-name thunk should be partially applied (retained as an explicit function, not executed). This removes one layer of call indirection for the on*() callbacks in SActivity and other Creatable/Destroyable classes. <issue_comment>username_1: I didn't know it can be used in this way. Thanks!
{'fraction_non_alphanumeric': 0.056439942112879886, 'fraction_numerical': 0.005788712011577424, 'mean_word_length': 4.323076923076923, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26440381', 'n_tokens_mistral': 201, 'n_tokens_neox': 185, 'n_words': 95}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: EN-3886: Secondary versions of dataset username_0: Add a /dataset-version[resourceName] endpoint which returns:
- the version of the dataset in truth
- the versions that secondaries have of the dataset
- the set of feedback secondaries

-- Requires commit in data-coordinator: 2c58d04ad479bd2ce9c8227c0596f0d4fe9d9cb9 --

<issue_comment>username_0: Pending checks:
- Needs testing
- data-coordinator PR

<issue_comment>username_1: [soda-fountain #566](https://socrata-oss.ci.cloudbees.com/job/soda-fountain/566/) SUCCESS This pull request looks good <issue_comment>username_1: [soda-fountain #567](https://socrata-oss.ci.cloudbees.com/job/soda-fountain/567/) SUCCESS This pull request looks good <issue_comment>username_1: [soda-fountain #569](https://socrata-oss.ci.cloudbees.com/job/soda-fountain/569/) SUCCESS This pull request looks good
{'fraction_non_alphanumeric': 0.10388127853881278, 'fraction_numerical': 0.05707762557077625, 'mean_word_length': 5.264285714285714, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2139528', 'n_tokens_mistral': 329, 'n_tokens_neox': 300, 'n_words': 69}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to use image search username_0: When I used image search, it returned all documents in Elasticsearch. The plugin described in this article generates image feature vectors. <issue_comment>username_1: Generating image feature vectors is up to you. You can do it a few ways: - just unroll the image into a vector (e.g. an image with height 28 and width 28 becomes a vector with length 784). - use an algorithm/library like phash to generate a more robust feature vector than just the raw pixel values - use a convolutional network to process the image but extract the values at the next-to-last layer instead of the classification layer. <issue_comment>username_1: I have briefly considered adding some functionality to the plugin to ingest images but there are many other things to solve first. It might be implemented as an ingest processor with a small handful of common algos for mapping images to vectors, e.g. phash, sift, a few convnets, etc.. <issue_comment>username_0: {'_index': 'long', '_type': '_doc', '_id': 'ids-1485050', '_score': 0.0625227} {'_index': 'long', '_type': '_doc', '_id': 'ids-1485146', '_score': 0.06249257} {'_index': 'long', '_type': '_doc', '_id': 'ids-1485177', '_score': 0.06245229} I used VGG16 in Keras to obtain the image feature vectors, and saved them in Elasticsearch 7.4.0. However, when I used Elasticsearch to query the image data, it returned all of them. How can I obtain the similar images I want to query? How similar does the _score attribute have to be? I'm using the L1 function here. <issue_comment>username_1: I guess you mean they all had roughly the same score? L1 might not be a good similarity function for those vectors. I would try L2. <issue_comment>username_1: Here is an example of using L2 (on raw image pixels, not feature vectors): http://demo.elastiknn.klibisz.com/dataset/cifar-l2 You can see the exact mapping and query for each set of results by clicking on the `Mapping` and `Query` tabs. <issue_comment>username_0: Does that mean I have to save the original image pixels in Elasticsearch instead of saving the feature vectors? Is that right? How does the elastiknn library create search queries?
```
{
    "query" : {
        "elastiknn_nearest_neighbors" : {
            "field" : "vec",
            "vec" : {
                "index" : "cifar-l2-lsh-2",
                "id" : "15231",
                "field" : "vec"
            },
            "candidates" : 20,
            "similarity" : "l2",
            "model" : "lsh"
        }
    },
    "size" : 10,
    "_source" : true
}
```
<issue_comment>username_0: I created the Elasticsearch index with the following Python:
```
from elastiknn.client import ElastiKnnClient
from elastiknn.api import Mapping

# create indices
eknn = ElastiKnnClient()
dim = 512
index = "long"
field = "long"
mapping = Mapping.DenseFloat(dims=dim)
eknn.es.indices.refresh()
eknn.es.indices.create(index=index)
eknn.es.indices.refresh()
m = eknn.put_mapping(index, field, mapping)
print(m)
# {'acknowledged': True}
```
<issue_comment>username_1: You can use that same mapping with L1, L2, and angular. If you want to use an approximate method you'll have to modify the line `mapping = Mapping.DenseFloat` to another mapping. Unfortunately it looks like I forgot to add a mapping dataclass for the L2 LSH method. That would go here: https://github.com/username_1/elastiknn/blob/master/client-python/elastiknn/api.py#L64 But you can also just create a dict matching the JSON and submit a PUT request.
You can see how the mapping is submitted here: https://github.com/username_1/elastiknn/blob/master/client-python/elastiknn/client.py#L49-L54 You can save either the original pixels or the feature vector. I was just pointing to an example where L2 seems to work well on the original pixels. Most papers I've read also use L2 on feature vectors or they normalize the features vectors to unit norm and use angular. I don't think I've seen L1 used for images. For exact queries, the plugin creates a FunctionScoreQuery that scores every vector in the index against the query vector. So that's obviously not very efficient. For approximate queries it hashes the stored vectors, indexes the hashes (just like words), uses the same hash function to hash the query vector, and runs a boolean match query to lookup stored vectors which share the most hash values with the query vector. There's a lot more info here: http://elastiknn.klibisz.com/api/ <issue_comment>username_0: Thank you. I'll try it first <issue_comment>username_0: I made the following error while creating the elasticsearch index: ``` Traceback (most recent call last): File "D:/zhenzi/es7.4.0/main_create.py", line 14, in <module> m = eknn.put_mapping(index, field, mapping) File "D:\zhenzi\es7.4.0\elastiknn\client.py", line 56, in put_mapping return self.es.transport.perform_request("PUT", f"/{index}/_mapping", body=body) File "F:\py368\Envs\knn\lib\site-packages\elasticsearch\transport.py", line 358, in perform_request timeout=timeout, File "F:\py368\Envs\knn\lib\site-packages\elasticsearch\connection\http_urllib3.py", line 257, in perform_request self._raise_error(response.status, raw_data) File "F:\py368\Envs\knn\lib\site-packages\elasticsearch\connection\base.py", line 182, in _raise_error status_code, error_message, additional_info elasticsearch.exceptions.TransportError: TransportError(500, '', 'Incompatible type [elastiknn_dense_float_vector], model [Some(lsh)], similarity [None]') ``` Add in the file (https://github.com/username_1/elastiknn/blob/master/client-python/elastiknn/api.py) the following: ``` @dataclass(frozen=True) class DenseFloatLong(Base): dims: int def to_dict(self): return { "type": "elastiknn_dense_float_vector", "elastiknn": { "model": "lsh", "dims": self.dims, "similarity": "12", "bands": 100, "rows": 1, "width": 3 } } ``` <issue_comment>username_1: Try `"similarity": "l2"`, not `"similarity": "12"`. The error isn't particularly helpful, but `similarity [None]` means it wasn't able to match "12" to a known similarity. 
<issue_comment>username_0: Without `"similarity": "12"` (commented out below), I again get the following error message:
```
Traceback (most recent call last):
File "D:/zhenzi/es7.4.0/main_create.py", line 14, in <module>
m = eknn.put_mapping(index, field, mapping)
File "D:\zhenzi\es7.4.0\elastiknn\client.py", line 56, in put_mapping
return self.es.transport.perform_request("PUT", f"/{index}/_mapping", body=body)
File "F:\py368\Envs\knn\lib\site-packages\elasticsearch\transport.py", line 358, in perform_request
timeout=timeout,
File "F:\py368\Envs\knn\lib\site-packages\elasticsearch\connection\http_urllib3.py", line 257, in perform_request
self._raise_error(response.status, raw_data)
File "F:\py368\Envs\knn\lib\site-packages\elasticsearch\connection\base.py", line 182, in _raise_error
status_code, error_message, additional_info
elasticsearch.exceptions.TransportError: TransportError(500, '', 'Incompatible type [elastiknn_dense_float_vector], model [Some(lsh)], similarity [None]')
```
The script that creates the Elasticsearch index is as follows:
```
from elastiknn.client import ElastiKnnClient
from elastiknn.api import Mapping

# create indices
eknn = ElastiKnnClient()
dim = 3072
index = "test"
field = "test"
mapping = Mapping.DenseFloatLong(dims=dim)
eknn.es.indices.refresh()
eknn.es.indices.create(index=index)
eknn.es.indices.refresh()
m = eknn.put_mapping(index, field, mapping)
print(m)
# {'acknowledged': True}
```
api.py
```
@dataclass(frozen=True)
class DenseFloatLong(Base):
    dims: int

    def to_dict(self):
        return {
            "type": "elastiknn_dense_float_vector",
            "elastiknn": {
                "model": "lsh",
                "dims": self.dims,
                # "similarity": 12,
                "bands": 100,
                "rows": 1,
                "width": 3
            }
        }
```
I am using Elasticsearch version 7.4.0. <issue_comment>username_1: You need to specify the similarity as `l2`.
```
@dataclass(frozen=True)
class DenseFloatLong(Base):
    dims: int

    def to_dict(self):
        return {
            "type": "elastiknn_dense_float_vector",
            "elastiknn": {
                "model": "lsh",
                "dims": self.dims,
                "similarity": "l2",
                "bands": 100,
                "rows": 1,
                "width": 3
            }
        }
```
The similarity field is required when using the lsh model. It's a very subtle character difference. `l2` is the lowercase of `L2`. `12` is the number twelve. <issue_comment>username_0: Thank you. I made a mistake between the number 1 and the letter l. It can now be created successfully. <issue_comment>username_0: Why does the query result differ from what I expected?
The query results are as follows:
```
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485205', '_score': 1000000.0}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485185', '_score': 1.2629013}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485238', '_score': 1.2498195}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485149', '_score': 1.2451644}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485198', '_score': 1.2327285}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485219', '_score': 1.2177316}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485212', '_score': 1.1902684}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485229', '_score': 1.1901888}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485152', '_score': 1.1610229}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1488300', '_score': 0.0}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485209', '_score': 0.0}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485208', '_score': 0.0}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1488289', '_score': 0.0}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485203', '_score': 0.0}
{'_index': 'test', '_type': '_doc', '_id': 'ids-1485202', '_score': 0.0}
```
How are the score values calculated? All the pictures in elasticsearch have some similarities; for example, all my pictures contain the words "children's day".

<issue_comment>username_1: I'm not sure what you are expecting. :) You can read some more about the scoring method here: http://elastiknn.klibisz.com/api/#similarity-scoring

<issue_comment>username_1: @username_0 I added the missing mappings and queries in this PR #68. Docs are here: http://elastiknn.klibisz.com/python-client/

<issue_comment>username_1: One thing to consider when doing image search with L2 is that the floating point operations might overflow if your vectors have large values. You might try to scale your vector values so they are between 0 and 1.

<issue_comment>username_0: What's a good python library for generating image feature vectors? Currently I use image feature vectors similar to [0.0, 0.2, ...], in this format.

<issue_comment>username_1: I've always used the pretrained models from Keras: https://keras.io/api/applications/

<issue_comment>username_1: Closing this. Let me know if there are any other questions and we can open it again if needed.<issue_closed>
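As an illustration of that last suggestion, here is a minimal sketch of extracting image feature vectors with a Keras pretrained model and scaling them into [0, 1] before indexing; the model choice and the min-max scaling are assumptions for the sketch, not something specified in this thread:
```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image

# include_top=False with average pooling yields one flat feature vector per image
model = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def image_feature_vector(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    vec = model.predict(x)[0]
    # scale into [0, 1] to avoid float overflow in L2 scoring, as advised above
    lo, hi = vec.min(), vec.max()
    return ((vec - lo) / (hi - lo + 1e-9)).tolist()
```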
{'fraction_non_alphanumeric': 0.12060517493337918, 'fraction_numerical': 0.03601822401788017, 'mean_word_length': 3.438763830599008, 'pattern_counts': {'":': 27, '<': 26, '<?xml version=': 0, '>': 26, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1327864', 'n_tokens_mistral': 3919, 'n_tokens_neox': 3583, 'n_words': 1242}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Issues with gbimage-bridge
username_0: <!-- To make it easier for me to help you, please include as much useful information as possible. Before opening a new issue, please search existing issues https://github.com/username_1/gatsby-background-image/issues -->

## Description
Hello there, I am here because I can't seem to find any solution to this problem. In short, with gatsby v3 the gatsby-background-image and gbimage-bridge packages worked flawlessly, and now with the v4 update nothing works at all. gatsby-background-image installs perfectly fine, but gbimage-bridge throws a laundry list of errors. Forcing the install is useless, as it does nothing and still doesn't work. Is anyone else experiencing this as well, or am I just crazy and a really bad developer lol. I removed both dependencies as they weren't useful because they didn't work.
<!-- Provide as much useful information as you can about the issue that you're seeing. -->

### Steps to reproduce
<!-- Clear steps describing how to reproduce the issue. Please link to a demo project if possible, this makes your issue _much_ easier to diagnose (seriously). Perhaps just clone https://github.com/username_1/gbitest and try to replicate your issue there? -->

### Expected result
Working plugin.
<!-- What should happen? -->

### Actual result
The errors that were being thrown were:
```
npm ERR! code ERESOLVE
npm ERR! ERESOLVE unable to resolve dependency tree
npm ERR!
npm ERR! While resolving: gatsby-starter-default@0.1.0
npm ERR! Found: gatsby-plugin-image@2.4.0
npm ERR! node_modules/gatsby-plugin-image
npm ERR!   gatsby-plugin-image@"^2.4.0" from the root project
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer gatsby-plugin-image@"^1.0.1" from gbimage-bridge@0.2.0
npm ERR! node_modules/gbimage-bridge
npm ERR!   gbimage-bridge@"*" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
```

### Environment
```
System:
  OS: macOS 12.0.1
  CPU: (8) arm64 Apple M1
  Shell: 5.8 - /bin/zsh
Binaries:
  Node: 16.6.2 - /usr/local/bin/node
  npm: 7.21.1 - /opt/homebrew/bin/npm
Languages:
  Python: 2.7.18 - /usr/bin/python
Browsers:
  Chrome: 96.0.4664.110
  Firefox: 91.0.2
  Safari: 15.1
npmPackages:
  gatsby: ^4.1.3 => 4.1.3
  gatsby-plugin-gatsby-cloud: ^4.1.1 => 4.1.1
  gatsby-plugin-google-analytics: ^4.1.0 => 4.1.0
  gatsby-plugin-google-fonts: ^1.0.1 => 1.0.1
  gatsby-plugin-google-tagmanager: ^4.1.1 => 4.1.1
  gatsby-plugin-image: ^2.4.0 => 2.4.0
  gatsby-plugin-mailchimp: ^5.2.2 => 5.2.2
  gatsby-plugin-manifest: ^4.1.2 => 4.1.2
  gatsby-plugin-offline: ^5.1.2 => 5.1.2
  gatsby-plugin-react-helmet: ^5.1.0 => 5.1.0
  gatsby-plugin-robots-txt: ^1.6.14 => 1.6.14
  gatsby-plugin-sharp: ^4.1.2 => 4.1.2
  gatsby-plugin-sitemap: ^5.1.1 => 5.1.1
  gatsby-plugin-styled-components: ^5.1.0 => 5.1.0
  gatsby-plugin-web-font-loader: ^1.0.4 => 1.0.4
  gatsby-source-filesystem: ^4.1.1 => 4.1.1
  gatsby-transformer-sharp: ^4.1.0 => 4.1.0
npmGlobalPackages:
  gatsby-cli: 4.4.0
  gatsby: 4.3.0
```
<!-- Required. Run `gatsby info --clipboard` in your gatsby project directory and paste its contents here. Should your CLI of choice not support copying directly to clipboard, just copy & paste `gatsby info`'s output here - both ways enclosing it within a code block (triple backticks) would be great! -->
<issue_comment>username_1: Hi @username_0!
Fixed the peer dependencies in the just-published `gbimage-bridge@v0.2.1`; would you care to try it?

Happy Holidays!

Best,

Tim.

<issue_comment>username_0: Absolutely would love to.

<issue_comment>username_1: When it works, feel free to close the issue : ). And no godness required, thanks for the error msgs ^^.

<issue_comment>username_0: lol. No problem. =)<issue_closed>
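For anyone landing here later, picking up the fix should just be a matter of bumping the bridge package; the exact command below is assumed, not quoted from the thread:
```shell
npm install gbimage-bridge@^0.2.1
```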
{'fraction_non_alphanumeric': 0.11530094271211022, 'fraction_numerical': 0.04278462654097172, 'mean_word_length': 2.990356798457088, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 30, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28477325', 'n_tokens_mistral': 1553, 'n_tokens_neox': 1417, 'n_words': 469}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Polymer 2.0 - issue with dom-repeat inside of html table (using 2.0 composition syntax)
username_0: Hey all, I'm working on upgrading some Polymer v1.1 samples to v2.0. I'm trying out the new template binding syntax (using composition) as such:
```html
<base href="http://polygit.org/polymer+:master/components/">
<link href="polymer/polymer-element.html" rel="import">
<link href="polymer/polymer.html" rel="import">

<dom-module id="dom-element">
  <template>
    <table>
      <tr><th>Numbers</th></tr>
      <dom-repeat items="[1,2,3]">
        <template>
          <tr><td>[[item]]</td></tr>
        </template>
      </dom-repeat>
    </table>
  </template>
  <script>
    class DomElement extends Polymer.Element {
      static get is() { return "dom-element"; }
    }
    customElements.define(DomElement.is, DomElement);
  </script>
</dom-module>
```
The output from this is that the numbers in the dom-repeat display above the header. An inspection of the DOM itself shows the `<tr>` elements first, then `<dom-repeat>`, then the `<table>` element. Is this "composition" syntax for dom-repeat templates not ready yet? If so, it's certainly not an issue for me. Just wanted to report back on what I'm seeing.

<issue_comment>username_1: I thought the doc was pretty clear that you should only convert `dom-repeat`, `dom-if` and `dom-bind` elements at the document level, not inside an element template.

https://www.polymer-project.org/2.0/docs/upgrade#remove-type-extension-elements
https://www.polymer-project.org/2.0/docs/devguide/templates#dom-repeat

And this is one of the main reasons: the HTML parser has rules about what can go inside a `<table>`. `<template>` is on that list, and `<dom-repeat>` isn't, so the `<dom-repeat>` gets evicted from the table. If you use `<template is="dom-repeat">`, it should work fine.

<issue_comment>username_0: Gotcha, thanks Arthur<issue_closed>
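For reference, the type-extension form that username_1 recommends keeps the repeater inside the table, since the parser allows `<template>` there; a sketch adapted from the snippet above:
```html
<table>
  <tr><th>Numbers</th></tr>
  <template is="dom-repeat" items="[1,2,3]">
    <tr><td>[[item]]</td></tr>
  </template>
</table>
```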
{'fraction_non_alphanumeric': 0.12063023141309699, 'fraction_numerical': 0.008862629246676515, 'mean_word_length': 3.096774193548387, 'pattern_counts': {'":': 0, '<': 36, '<?xml version=': 0, '>': 36, 'https://': 2, 'lorem ipsum': 0, 'www.': 2, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4820148', 'n_tokens_mistral': 613, 'n_tokens_neox': 572, 'n_words': 210}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Autofill rules from a CSV
username_0: First of all, I really appreciate the super great work you all have been doing.

I have the values of my rules in a CSV file. Whenever I upload a CSV file, my rule fields should be auto-populated with the values from that particular CSV file. I did some research and thought `setRules` might help, but I need either an SQL query or JSON to auto-populate using `setRules` or `setRulesFromSQL`. I need your help in auto-generating that SQL query with values from my CSV so that I can use `setRulesFromSQL`, or some other optimised solution to my issue. It would mean a lot if you could help me in this regard, thanks in advance.

<issue_comment>username_0: I got this resolved myself. I read the filter names, operators and values as a 3D matrix from the CSV file and use `setRulesFromSQL`. Please let me know if you have a more optimised solution. Thanks a lot again for providing us with such great plugins. Thank you.<issue_closed>

<issue_comment>username_1: I would have used `setRules`; there is no point in using `setRulesFromSQL`, which will simply parse the SQL and call `setRules` itself. You are only multiplying the sources of potential errors. The payload expected by `setRules` is documented: https://querybuilder.js.org/#method-setRules => "See an example"
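A minimal sketch of that suggestion: parse the CSV rows yourself and hand the structured payload straight to `setRules`, skipping the SQL round-trip entirely. It assumes jQuery and the QueryBuilder plugin are loaded, and the CSV column layout here is an assumption:
```javascript
// assumed CSV layout: one rule per line, as field,operator,value
const csvText = 'price,less,10.25\nname,equal,Mistic';
const rows = csvText.trim().split('\n').map(line => line.split(','));

$('#builder').queryBuilder('setRules', {
  condition: 'AND',
  rules: rows.map(([id, operator, value]) => ({ id, operator, value }))
});
```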
{'fraction_non_alphanumeric': 0.040530582166543844, 'fraction_numerical': 0.0029476787030213707, 'mean_word_length': 4.520325203252033, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26379369', 'n_tokens_mistral': 342, 'n_tokens_neox': 330, 'n_words': 218}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Result table
username_0: <issue_comment>username_1: Hi, it might be related to this, but I'm trying to generate fingerprints from a custom source using the pretrained model you shared here: https://github.com/username_0/neural-audio-fp/issues/10#issuecomment-878335408 and I was wondering if you could tell me the expected time for generating a fingerprint from a single query? It took 1,629 seconds to generate the fingerprints corresponding to 2 queries (1 min length) [even though there are 3 wav files in the source directory; I'm looking into why that is as well].

`From the CLI Output: 2/2 [==============================] - 1629s 47ms/step`

I'm using a 40-CPU server with an RTX 3090.

Also, can you help me understand the shape of the resulting db? I understand that the shape is `n_items x d`, and `n_items` is `#num audios x batchsize`. I don't see what this batch size means, and therefore what the resulting db shape is. Thanks in advance!

<issue_comment>username_0: @username_1 I opened the issue as a question #23.
{'fraction_non_alphanumeric': 0.09022556390977443, 'fraction_numerical': 0.03665413533834586, 'mean_word_length': 4.851648351648351, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30610594', 'n_tokens_mistral': 323, 'n_tokens_neox': 283, 'n_words': 149}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Protected constructors should be accessible in subclasses' static methods
username_0: I think the below should be allowed.
```ts
class A {
    protected constructor() { }
}

class B extends A {
    static create() {
        new B(); // ERROR, but should not error.
    }
}
```
I wanted to override a static factory and customise it a bit, but it seems I wasn't allowed to do it. My motivation is that private constructors are accessible in a class's static methods:
```ts
class A {
    static create() {
        new this();
    }

    protected constructor() { }
}
```
A natural extension of the private case is to allow protected constructors in subclasses' static methods.

<issue_comment>username_1: Definitely a bug; this should be equivalent to the allowed
```ts
class A {
    protected constructor() { }
}

class B extends A {
    protected constructor() {
        super();
    }

    static create() {
        new B(); // OK
    }
}
```<issue_closed>
{'fraction_non_alphanumeric': 0.0905624404194471, 'fraction_numerical': 0.0019065776930409914, 'mean_word_length': 2.6458333333333335, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17683518', 'n_tokens_mistral': 301, 'n_tokens_neox': 288, 'n_words': 127}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Bug: Command Handler Not Functioning Properly
username_0: **Describe the bug**
I am trying to add a command filter.js, but in the help it's showing undefined.

**How To Reproduce**
Steps to reproduce the behavior:
1. Add the command filter.js [see my repl.it][repl.it/@pro260/MUSIBOTMEH]
2. Type the help command in the server [###help]

**Expected behavior**
The command should take place properly.

**Environment (add if possible)**
* Node.js version: 12.16.1

**Additional information & screenshots**
Add any other context or screenshots about the problem here.
![image](https://user-images.githubusercontent.com/75003322/100535183-a30e9300-323c-11eb-8848-485167dc6c2e.png)
![image](https://user-images.githubusercontent.com/75003322/100535190-b15caf00-323c-11eb-9202-409a09e1ea3d.png)

<issue_comment>username_1: Do you use it properly? It must be like
```js
module.exports = {
    name: 'filter',
    aliases: ["aliases here"],
    execute(message) {
        // your code here
    }
}
```
or with async and args
```js
module.exports = {
    name: 'filter',
    aliases: ["aliases here"],
    async execute(message, args) {
        // your code here
    }
}
```

<issue_comment>username_2: Clearly not a bug, don't open issues for programming support.<issue_closed>
{'fraction_non_alphanumeric': 0.12095531587057011, 'fraction_numerical': 0.06933744221879815, 'mean_word_length': 4.175298804780876, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1093907', 'n_tokens_mistral': 493, 'n_tokens_neox': 434, 'n_words': 128}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Menacing fleetd logs
username_0: Every 5 minutes or so, something like this will appear in the fleetd logs:
```
Sep 09 16:47:33 core-01 fleetd[1375]: INFO client.go:278: Failed getting response from http://localhost:4001/: unexpected EOF
Sep 09 16:47:33 core-01 fleetd[1375]: ERROR client.go:200: Unable to get result for {Watch /_coreos.com/fleet/job}, retrying in 100ms
```
This is due to etcd closing all watcher connections every 5 minutes. It would be great to handle this error gracefully and not log it, as it is a total red herring when debugging real issues.

<issue_comment>username_1: :+1:

<issue_comment>username_2: I tried to find the places where such error messages are printed, but couldn't find any. Probably they are already gone, so I'm closing this.<issue_closed>
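fleet is written in Go; a generic sketch of the behaviour the reporter asks for, reconnecting silently on the expected periodic EOF and logging only other failures, might look like this (all names here are hypothetical and this is not fleet's actual client code):
```go
package main

import (
	"errors"
	"io"
	"log"
	"time"
)

// watch stands in for one blocking etcd watch call (hypothetical signature).
type watch func() error

// runWatcher reconnects quietly when etcd closes the stream, which it does
// to every watcher periodically, and only logs errors that are not that case.
func runWatcher(w watch, attempts int) {
	for i := 0; i < attempts; i++ {
		err := w()
		if err == nil || errors.Is(err, io.ErrUnexpectedEOF) {
			continue // expected periodic disconnect: no log spam
		}
		log.Printf("watch failed, retrying in 100ms: %v", err)
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	runWatcher(func() error { return io.ErrUnexpectedEOF }, 3)
}
```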
{'fraction_non_alphanumeric': 0.07926829268292683, 'fraction_numerical': 0.05731707317073171, 'mean_word_length': 4.822695035460993, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30944892', 'n_tokens_mistral': 279, 'n_tokens_neox': 240, 'n_words': 113}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix issues related to consolidation of warning/error messages
username_0: These are issues left over from the #2319 PR. Here's a sampling of what's currently generated, based on creating fake data with a variety of errors:

Acceptance Criteria:
-

<issue_comment>username_0: Here's a sampling of what the OLD warnings look like (currently still in the json):
```json
{ "scope" : "ITEM", "id" : "SomeEntityID", "details" : "Invalid code: 'Q' is not a display value in altValues set for 'processingModeCode' ('processing_mode_code')" },
{ "scope" : "ITEM", "id" : "SomeEntityID", "details" : "Invalid date: '999999-0400' for element 'testReportDate' ('test_result_report_date')" },
{ "scope" : "ITEM", "id" : "SomeEntityID", "details" : "Invalid date: '9' for element 'testOrderedDate' ('order_test_date')" },
{ "scope" : "ITEM", "id" : "SomeEntityID", "details" : "Invalid date: '999' for element 'specimenCollectedDate' ('specimen_collection_date_time')" },
{ "scope" : "ITEM", "id" : "SomeEntityID", "details" : "Invalid postal code '85714123123123123' for 'performingFacilityZip' ('testing_lab_zip_code')" },
{ "scope" : "ITEM", "id" : "SomeEntityID", "details" : "Invalid postal code 'Not a zip code' for 'orderingFacilityZip' ('ordering_facility_zip_code')" }
],
```
And here's a sampling of what the new consolidated warnings/errors look like:
```json
"consolidatedErrors" : [
  { "message" : "Blank value for element ", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" }
],
"consolidatedWarnings" : [
  { "message" : "Invalid code: 'Q' is not a display value in altValues set for 'processingModeCode' ('processing_mode_code')", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" },
  { "message" : "Invalid date: '9' for element 'testOrderedDate' ('order_test_date')", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" },
  { "message" : "Invalid postal code '85714123123123123' for 'performingFacilityZip' ('testing_lab_zip_code')", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" },
  { "message" : "Invalid postal code 'Not a zip code' for 'orderingFacilityZip' ('ordering_facility_zip_code')", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" },
  { "message" : "Invalid date: '999' for element 'specimenCollectedDate' ('specimen_collection_date_time')", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" },
  { "message" : "Invalid date: '999999-0400' for element 'testReportDate' ('test_result_report_date')", "rows" : "Rows: 2, 4, 6, 8, 10, 12, 14, 16, 18–19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, 41, 122–152" },
  { "message" : "Unexpected 'xxxsenderId' header is ignored", "rows" : "" }
]
```

<issue_comment>username_1: Hey @username_0! I took care of steps 1-4 in this ticket (all the code changes). Working with @anshulkumar-usds now on the release notes; I'm a bit confused, though. Is this affecting `/api/waters`?

<issue_comment>username_0: /api/waters is identical to the current /api/reports, but with a different authentication style. So it would affect both.<issue_closed>
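The consolidation itself is just a group-by on the message text with the row numbers compressed into ranges; a language-agnostic sketch in Python of how the "Rows: 2, 4, 18–19, ..." strings above could be produced (prime-router itself is Kotlin, and this is not its actual code):
```python
from collections import defaultdict

def consolidate(items):
    """Group (message, row) pairs and compress rows into ranges like '2, 4, 18-19'."""
    by_msg = defaultdict(list)
    for message, row in items:
        by_msg[message].append(row)
    out = []
    for message, rows in by_msg.items():
        rows = sorted(set(rows))
        ranges, start, prev = [], rows[0], rows[0]
        for r in rows[1:]:
            if r == prev + 1:
                prev = r
                continue
            ranges.append(f"{start}" if start == prev else f"{start}–{prev}")
            start = prev = r
        ranges.append(f"{start}" if start == prev else f"{start}–{prev}")
        out.append({"message": message, "rows": "Rows: " + ", ".join(ranges)})
    return out

# e.g. consolidate([("Blank value", 2), ("Blank value", 4),
#                   ("Blank value", 18), ("Blank value", 19)])
# -> [{'message': 'Blank value', 'rows': 'Rows: 2, 4, 18–19'}]
```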
{'fraction_non_alphanumeric': 0.1648664013644116, 'fraction_numerical': 0.10830017055144969, 'mean_word_length': 3.9633286318758816, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 28, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '12304353', 'n_tokens_mistral': 1630, 'n_tokens_neox': 1264, 'n_words': 434}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Use pagination for all endpoints returning potentially large collections
username_0: Part of https://github.com/microsoft/CCF/issues/1918.

https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#981-server-driven-paging
"Return collections with server-side paging, even if your resource does not currently need paging." ([Azure guidelines](https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md#advice-for-new-services))

Scales with number of nodes:
- `/quotes`
- `/network_info`

Scales with number of code versions:
- `/code`

<issue_comment>username_1: Since we don't have ordered iteration over `kv::Map`s, it's not easy to do this correctly while those collections might be changing. If you've received 10 nodes in the first page, and a node is added, there may be missing/repeated nodes on the remaining pages. We could get around this by caching the full collection on the node at the point of the first request, and pointing at this with the continuation URL, but this is flaky (node-local, memory use, requires some usage limit so is only a mitigation). The guidelines don't seem to talk about this, i.e. how collection changes affect pagination. Maybe we just include an ETag so any collection changes mid-page-iteration result in an error from the server?

<issue_comment>username_2: Just an FYI - if you don't plan on supporting client-driven paging, the guidelines specify that an error response must be provided. https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#162-feature-allow-list

<issue_comment>username_3: @username_2 for the reasons Eddy suggests, paginating reads from the current state will either need to be stateful, or will sometimes end up doing something that is counterintuitive for most users. Historical reads, on the other hand, should be straightforward. For the time being, let's return an error on `quotes`, `network_info` and `code` if a user checks for pagination support. The number of entries returned by each of these endpoints is quite unlikely to even reach two digits in any event.

<issue_comment>username_0: There is no way to check for support. You would simply get a 400 back if you sent any unsupported filter/sorting parameter, and I think this is already the case now. The only thing that could be improved is the included error message, but it may not be worth the effort.

<issue_comment>username_3: Technically we comply, but we could return something a little friendlier like:
```
HTTP/1.1 400 Bad Request
Content-Type: application/json

{
  "error": {
    "code": "ErrorUnsupportedOrderBy",
    "message": "Ordering by name is not supported."
  }
}
```
I don't know that we want to suffer the performance hit of checking for all possible flags on every RPC though, so let's leave it that way.

<issue_comment>username_3: Until we have built-in endpoints that run historical queries, I don't think we will implement anything that meets the definition of "potentially large collections", so I am closing this for now.<issue_closed>
{'fraction_non_alphanumeric': 0.06497266001929881, 'fraction_numerical': 0.009006111289803796, 'mean_word_length': 4.553571428571429, 'pattern_counts': {'":': 3, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24178035', 'n_tokens_mistral': 841, 'n_tokens_neox': 789, 'n_words': 413}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: append to import list?
username_0: When you have `groupImports: false` and `sortImport: false`, any new import is `prepended` to the top of the file. If this was being done manually, you'd normally append it to your imports like:
```javascript
import old from 'old'
import new from 'new'
```
and not this, which is the current functionality:
```javascript
import new from 'new'
import old from 'old'
```
Maybe a new option called `newImportsDirection`, so you can choose? i.e. `newImportsDirection: 'append'` would be a good way to solve this issue?

<issue_comment>username_1: Maybe new imports should always be added to the bottom when sorting and grouping are turned off? I can't really see a use case for adding them to the top. I probably should've done this when I added the 'sortImports' option :P

<issue_comment>username_2: In lib/ImportStatements.js, reverse the result in `_toGroups()` with `.reverse()`:
```javascript
let result = partition(
  importsArray,
  (importStatement: ImportStatement): boolean =>
    !importStatement.isParsedAndUntouched(),
).reverse();
```

<issue_comment>username_3: @username_2 do you want to push that as a PR? I'm happy to review and merge.
{'fraction_non_alphanumeric': 0.07907348242811502, 'fraction_numerical': 0.003993610223642172, 'mean_word_length': 4.2426778242677825, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3215066', 'n_tokens_mistral': 358, 'n_tokens_neox': 341, 'n_words': 168}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Batch Size error
username_0: Hello, I wanted to use Roberta for sentence classification on protein sequences which I have converted into sentences. So first I train a tokenizer for my custom vocabulary:
```python
tokenizer = CharBPETokenizer()
tokenizer.train("bert_vocab.txt", vocab_size=8000, special_tokens=[
    "[CLS]",
    "[SEP]",
    "[UNK]",
    "[MASK]",
])
tokenizer.save_model("EsperBERTo")

from tokenizers.implementations import CharBPETokenizer
from tokenizers.processors import BertProcessing

tokenizer._tokenizer.post_processor = BertProcessing(
    ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ("[CLS]", tokenizer.token_to_id("[CLS]")),
)
print(tokenizer.encode("AAA ATA AKA"))
print(tokenizer.encode("AAA ATA AKA").tokens)
tokenizer.enable_truncation(max_length=512)

import torch
torch.cuda.is_available()

from transformers import RobertaConfig
config = RobertaConfig(
    vocab_size=8000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)

from transformers import RobertaTokenizerFast
tokenizer = RobertaTokenizerFast.from_pretrained("./EsperBERTo", max_len=512)
```
When I try to train the model, I get the error `ValueError: Expected input batch_size (64) to match target batch_size (8192).`
```python
model = RobertaForSequenceClassification(config=config)
model.num_parameters()

from transformers import DataCollatorForLanguageModeling
from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="bert_data.txt",
    block_size=64,
)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer)

from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
    output_dir="./EsperBERTo",
    overwrite_output_dir=True,
    num_train_epochs=100,
    per_device_train_batch_size=64,
    save_steps=10_000,
    save_total_limit=2,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
    prediction_loss_only=True,
)
trainer.train()
trainer.save_model("./EsperBERTo")
```
I understand I do not specify the output labels anywhere in the code, but I have yet to find any example I could follow to figure this out.

<issue_comment>username_1: It would be great to have all the information relative to your environment, as asked in the template, as well as the full error stack trace, so that we may help you.
<issue_comment>username_0:
```
trainer.train()
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 499, in train
    tr_loss += self._training_step(model, inputs, optimizer)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 622, in _training_step
    outputs = model(**inputs)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 357, in forward
    loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/torch/nn/functional.py", line 2113, in nll_loss
    .format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (64) to match target batch_size (8192).
```
For my environment I had `pytorch == 1.6` on a Linux-based system, but while trying to solve this I have mixed up a lot of my packages.

<issue_comment>username_0: @username_1 the issue is still there in `pytorch == 1.7.0`

<issue_comment>username_1: Could you please show us the result of running `transformers-cli env`?

<issue_comment>username_0: @username_1
```
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-693.5.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: <
```

<issue_comment>username_1: Maybe @username_2 has an idea of what could be going on.

<issue_comment>username_2: You're using `DataCollatorForLanguageModeling` with a model for sequence classification. It can't work, as `DataCollatorForLanguageModeling` prepares the labels for language modeling by duplicating the inputs, whereas you should have one label per sentence in your batch.

<issue_comment>username_0: @username_2 Thanks for the help. I tried to use `DataCollatorForTokenClassification`, but that throws
```
Traceback (most recent call last):
  File "m1.py", line 122, in <module>
    trainer.train()
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 747, in train
    tr_loss += self.training_step(model, inputs)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 1075, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/trainer.py", line 1105, in compute_loss
    return outputs["loss"] if isinstance(outputs, dict) else outputs[0]
  File "/home/bhavay18384/.conda/envs/myenv/lib/python3.7/site-packages/transformers/file_utils.py", line 1338, in __getitem__
    return inner_dict[k]
KeyError: 'loss'
```

<issue_comment>username_2: You're doing sequence classification, not token classification.
Also, `LineByLineTextDataset` can't be used, as it doesn't deal with labels. For doing text classification, you should look at the [run_glue example](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).

Also, we don't use issues to debug users' code, so please switch to the [forum](https://discuss.huggingface.co/) where there is a bigger community of people who will be able to help.<issue_closed>
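To make the fix concrete: for sequence classification each example needs its own label, so a tiny labeled dataset can replace `LineByLineTextDataset`, and no language-modeling collator is needed. A minimal sketch (not from the thread; the padding strategy and `max_len` are assumptions):
```python
import torch

class LabeledTextDataset(torch.utils.data.Dataset):
    """One label per sentence, for RobertaForSequenceClassification."""

    def __init__(self, tokenizer, texts, labels, max_len=64):
        # tokenize everything up front; returns dicts of per-example lists
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])  # one label per sentence
        return item

# e.g. dataset = LabeledTextDataset(tokenizer, ["AAA ATA AKA"], [0])
```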
{'fraction_non_alphanumeric': 0.11074577791062622, 'fraction_numerical': 0.035271259901360034, 'mean_word_length': 4.544324772162386, 'pattern_counts': {'":': 0, '<': 14, '<?xml version=': 0, '>': 13, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20534', 'n_tokens_mistral': 2369, 'n_tokens_neox': 2195, 'n_words': 547}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Likely incorrect results
username_0: I am not really sure about the results in the plots. I have GP and ML open for all, but for some reason, after 1 h the flow on the managed lanes drops, despite my not having changed anything about the demand and despite congested conditions.

![image](https://user-images.githubusercontent.com/60490284/107437729-46551180-6ae4-11eb-9035-fd70fed1277a.png)

<issue_comment>username_1: @username_0 Ramp metering goes to a fixed rate at 1 hr.

<issue_comment>username_1: @username_0 Please check this and close it if it is fixed.<issue_closed>
{'fraction_non_alphanumeric': 0.06867671691792294, 'fraction_numerical': 0.07705192629815745, 'mean_word_length': 5.571428571428571, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23822385', 'n_tokens_mistral': 203, 'n_tokens_neox': 176, 'n_words': 71}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Solving error on this class
username_0: I'm having this error the moment I visit the NelmioApiDoc URL:

Notice: Undefined index: required

KernelException.php on line 57:
```
GetResponseForExceptionEvent {#1191
  -exception: ContextErrorException {#3274
    -context: array:7 [ … ]
    #message: "Notice: Undefined index: required"
    #code: 0
    #file: "/var/www/api-onereach-basic/vendor/nelmio/api-doc-bundle/Nelmio/ApiDocBundle/Formatter/AbstractFormatter.php"
    #line: 75
    #severity: E_NOTICE
    trace: {
      /var/www/api-onereach-basic/vendor/nelmio/api-doc-bundle/Nelmio/ApiDocBundle/Formatter/AbstractFormatter.php:75 {
        › 'readonly' => array_key_exists('readonly', $info) ? $info['readonly'] : null,
        › 'required' => $info['required'],
        › 'default' => array_key_exists('default', $info) ? $info['default'] : null,
      }
      /var/www/api-onereach-basic/vendor/nelmio/api-doc-bundle/Nelmio/ApiDocBundle/Formatter/AbstractFormatter.php:125 { … }
      /var/www/api-onereach-basic/vendor/nelmio/api-doc-bundle/Nelmio/ApiDocBundle/Formatter/AbstractFormatter.php:161 { … }
      /var/www/api-onereach-basic/vendor/nelmio/api-doc-bundle/Nelmio/ApiDocBundle/Formatter/AbstractFormatter.php:35 { … }
      /var/www/api-onereach-basic/vendor/nelmio/api-doc-bundle/Nelmio/ApiDocBundle/Controller/ApiDocController.php:26 { … }
      /var/www/api-onereach-basic/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/HttpKernel.php:151 { … }
      /var/www/api-onereach-basic/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/HttpKernel.php:68 { … }
      /var/www/api-onereach-basic/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php:202 { … }
      /var/www/api-onereach-basic/web/app_dev.php:13 {
        › $request = Request::createFromGlobals();
        › $response = $kernel->handle($request);
        › $response->send();
        arguments: { … }
      }
    }
  }
  -allowCustomResponseCode: false
  -response: null
  -kernel: HttpKernel {#1696}
  -request: Request {#10}
  -requestType: 1
  -propagationStopped: false
}
```

<issue_comment>username_1: I think version 2 is no longer maintained, hence all the build failures. Is this problem also present in version 3?

<issue_comment>username_0: Sorry, but the version that we are installing follows your documentation published on the Symfony site. The hotfix is a minimal change, and we are using this version because the Symfony site is showing it. It would be very helpful.

<issue_comment>username_1: ping @username_2

<issue_comment>username_2: It's supposed to always exist normally, but I guess it doesn't cost anything to merge this. Thank you @username_0!
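The "minimal change" the reporter refers to is presumably making the `required` lookup null-safe, just like its neighbours on the adjacent lines of AbstractFormatter.php shown in the trace (a sketch, not the exact merged diff):
```php
'required' => array_key_exists('required', $info) ? $info['required'] : null,
```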
{'fraction_non_alphanumeric': 0.12595281306715064, 'fraction_numerical': 0.018874773139745917, 'mean_word_length': 3.570480928689884, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 11, 'https://': 0, 'lorem ipsum': 0, 'www.': 10, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26836141', 'n_tokens_mistral': 954, 'n_tokens_neox': 902, 'n_words': 197}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Convert Headers to use Map as underlying data structure
username_0: Currently `Headers` is based on a `Chain`. Thus we need to traverse the whole `Chain` to find values in it (`O(n)`). At a large scale, this becomes an expensive operation, especially in our case where we need to extract two headers which may or may not be there. The operation is clearly visible in the profiler. This PR changes `Headers` to be based on `Map` instead, making lookups `O(1)`.

If you guys think this is useless, let me know! Perhaps `Headers` used to be based on `Map` but it was inefficient or wrong in some other way?

**Not addressed in this PR**: The underlying Kafka `Headers` class actually allows having multiple values for the same key. The current `fs2-kafka` `Headers` class doesn't really allow it, because the main function is:
```scala
final def apply(key: String): Option[Header]
```
I believe this should be changed into either:
```scala
final def apply(key: String): List[Header]
```
or
```scala
final def apply(key: String): Option[NonEmptyList[Header]]
```
However, this PR does not address that.

<issue_comment>username_1: Thanks for this @username_0! I believe the current `Headers` implementation was meant to be consistent with the Java Kafka one, specifically in terms of allowing multiple entries with the same key. I can't recall why this is allowed in the Java implementation, but if we can figure out there's no real reason for it, then I'm happy to go ahead with this. :+1:

<issue_comment>username_0: I did some performance testing with a high load with the following properties:
- 90k messages per second from Kafka
- The headers are *always* present
- Looking for two headers (and I think there are 3 in the messages)

With that setup and this branch I got 1.41x better performance (observed in the profiler for that single function).

I would be curious to try another thing: use `Array` as the underlying data structure and traverse that. I'll test that and report back.

<issue_comment>username_1: As such, I reckon we should allow multiple headers for the same key. I'm going to close this for now, but it would definitely be great to explore performance improvements while still allowing multiple values for the same key.

<issue_comment>username_0: @username_1 that's fair! One quick (albeit tiny) optimisation would be to use an Array instead of `Chain`.
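For what it's worth, a sketch of how an `O(1)` lookup could coexist with the multiple-values-per-key requirement discussed above, keying a `Map` by header name and keeping all values. This is hypothetical, not the shape fs2-kafka actually adopted, and `Header` here is a stand-in type:
```scala
import cats.data.NonEmptyList

final case class Header(key: String, value: Array[Byte]) // stand-in type

final case class Headers(toMap: Map[String, NonEmptyList[Header]]) {
  // O(1) lookup that still permits repeated keys
  def apply(key: String): Option[NonEmptyList[Header]] = toMap.get(key)
}
```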
{'fraction_non_alphanumeric': 0.06324435318275154, 'fraction_numerical': 0.006570841889117043, 'mean_word_length': 4.574370709382151, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10684289', 'n_tokens_mistral': 643, 'n_tokens_neox': 612, 'n_words': 378}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: are you still maintain this project?
username_0: There is another similar repo, [antd-schema-form](https://github.com/duan602728596/antd-schema-form); I suggest we do this together. My GitHub username is also my WeChat account. I hope we can come together and share some ideas.

<issue_comment>username_1: Hi, are you still looking for an antd JSON schema form? I published one recently: https://github.com/gravel-form/antd-form

[Here](https://gravel-form.github.io/antd-form/basic) is the playground. It's not based on rjsf. I'm using a middleware approach for the [core](https://github.com/gravel-form/gravel), which I think would be more flexible than rjsf.
{'fraction_non_alphanumeric': 0.10101010101010101, 'fraction_numerical': 0.015873015873015872, 'mean_word_length': 4.735537190082645, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22799951', 'n_tokens_mistral': 216, 'n_tokens_neox': 206, 'n_words': 80}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: elastix 5.0 different results and twice as slow on iteration steps with respect to elastix 4.9 using bspline transform and advancedmattesmutualinformation metric
username_0: Hi everyone,

I just tried out the new elastix 5.0 version against the previously installed elastix 4.9 version on the same data, with the same parameter file, performing a bspline transformation using the AdvancedMattesMutualInformation metric, and the calculation takes about twice as long as before, mainly due to the iteration steps taking twice as long.

Also, when applying the resulting TransformParameters file to the same input file, the output (created by transformix) from elastix 5.0 is quite different from that of 4.9. Especially strange to me is that the input point position, which formerly exactly matched the input point index, is now totally off; before, only the output points had deformations applied to them.

I tried with precompiled as well as self-compiled versions on my original Ubuntu 16 machine and on another Ubuntu 17 machine; the speed was always equally slow. Interestingly, using a precompiled elastix 4.9 version on the Ubuntu 17 machine, the calculation time was exactly as slow as with elastix 5.0 on the same machine, while on my original Ubuntu 16 machine elastix 4.9 performed twice as fast. The result of elastix 4.9 on both machines was the same, just taking longer on Ubuntu 17, while actually using more CPU power (almost twice as much). elastix 5.0 uses less than half the CPU of elastix 4.9 on both machines.

If I use the option `(UseMultiThreadingForMetrics "false")`, elastix 5.0 actually performs at the same fast speed as elastix 4.9 on the original Ubuntu 16 machine, while still yielding utterly different transformation results; the metric values are quite similar.

Is there maybe a new offset for the indices of an image in elastix 5.0, or what am I missing? Any ideas on what might be causing this? Please tell me if you need more information.

Best regards and thanks in advance!

<issue_comment>username_1: We fixed some performance issues in ITK related to this. We think it is solved in the develop branch with the latest ITK. Would you be able to confirm?

<issue_comment>username_0: In my case the performance does not seem to have significantly improved, as elastix-5.0-develop is still about 2 times slower than elastix 4.9, BUT the results are not completely off anymore; they now only show small deviations between the two versions.
{'fraction_non_alphanumeric': 0.028174603174603175, 'fraction_numerical': 0.01984126984126984, 'mean_word_length': 4.639821029082774, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '18988578', 'n_tokens_mistral': 648, 'n_tokens_neox': 574, 'n_words': 400}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Choose update channel main/sub
username_0: Is there a way to choose between the update channels? http://ark.gamepedia.com/Web_API#version
```
main: 223
sub: 223.3
```
Something like this. The sub updates often break the game^^ so I would like the cronjob to automatically update only to the main version.

<issue_comment>username_1: Unfortunately it isn't possible to update to anything other than the latest version using Steam. In fact, the return from http://arkdedicated.com/version is out of date, as it shows the latest version being `223.2`, yet https://steamcommunity.com/app/346110/discussions/0/594820656447032287/ says the latest version is `223.3`.

<issue_comment>username_2: @username_0 @username_1 The ``http://arkdedicated.com/version`` endpoint is very often out of date, and it can take a long time before it is "up to date" again. That's why I think it should not be used to determine the latest version.

<issue_comment>username_0: OK, I understand.<issue_closed>
{'fraction_non_alphanumeric': 0.07348560079443893, 'fraction_numerical': 0.04568023833167825, 'mean_word_length': 4.826589595375722, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10324602', 'n_tokens_mistral': 316, 'n_tokens_neox': 282, 'n_words': 130}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: how to delete org.pqrs.Karabiner-DriverKit-VirtualHIDDevice
username_0: I have uninstalled Karabiner-Elements, but when I check processes, I find org.pqrs.Karabiner-DriverKit-VirtualHIDDevice still running. I can't kill it.

<issue_comment>username_1: The process is still running even if you restart macOS?

<issue_comment>username_0: Yes. Although I use "sudo kill -9 pid", it restarts with a new pid.

<issue_comment>username_1: Thank you for the confirmation! I hope retrying the uninstallation of the DriverKit driver solves the issue. Please follow these steps:

1. Reinstall Karabiner-Elements to install the DriverKit driver manager, in order to uninstall it.
2. Execute the following command, which is described on this page: https://karabiner-elements.pqrs.org/docs/manual/operation/uninstall/
```shell
bash '/Library/Application Support/org.pqrs/Karabiner-DriverKit-VirtualHIDDevice/scripts/uninstall/deactivate_driver.sh'
```
3. Confirm `org.pqrs.Karabiner-DriverKit-VirtualHIDDevice` is killed.
4. Execute the following command.
```shell
sudo '/Library/Application Support/org.pqrs/Karabiner-Elements/uninstall.sh'
```

<issue_comment>username_0: Crash... When I try to run the `bash '/Library/Application Support/org.pqrs/Karabiner-DriverKit-VirtualHIDDevice/scripts/uninstall/deactivate_driver.sh'` command, my system immediately restarts with a pink screen. Although I use the GUI to uninstall, I get the same result.

<issue_comment>username_1: Hmm, it's an annoying macOS issue... Until Apple fixes the system crash, the only way to solve the problem, though tedious, is to follow these steps:

1. Disable SIP to prepare to use `systemextensionsctl reset`. https://developer.apple.com/documentation/security/disabling_and_enabling_system_integrity_protection
2. Execute `systemextensionsctl reset`. This command purges installed DriverKit drivers.
3. Enable SIP again.
4. Uninstall Karabiner-Elements.<issue_closed>

<issue_comment>username_0: It works. Thanks.
{'fraction_non_alphanumeric': 0.08243052284503062, 'fraction_numerical': 0.008478568064060292, 'mean_word_length': 4.867403314917127, 'pattern_counts': {'":': 0, '<': 10, '<?xml version=': 0, '>': 10, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24715372', 'n_tokens_mistral': 659, 'n_tokens_neox': 614, 'n_words': 213}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: fakessh.Process.wake() can never return on Python < 2.7
username_0: <issue_comment>username_0: I see two possible fixes for this:
1. Don't specify a timeout, and remove the while loop.
2. Something like
```python
# NB: Before Python 2.7 threading.Event.wait() always returns None.
# On Python 2.7+ it returns True, unless the timeout is reached.
while not self.wake_event.wait(0.1) and not self.wake_event.isSet():
    pass
```<issue_closed>

<issue_comment>username_1: Well spotted! Thanks again :)
{'fraction_non_alphanumeric': 0.09437086092715231, 'fraction_numerical': 0.02152317880794702, 'mean_word_length': 2.78125, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26971896', 'n_tokens_mistral': 187, 'n_tokens_neox': 177, 'n_words': 63}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Issues with building
username_0: ### The Problem
When I try to run `npm run build:dev`, I get the following error:
```js
internal/modules/cjs/loader.js:960
    throw err;
    ^
Error: Cannot find module 'chalk'
```
My temporary fix was to run `npm install chalk`, which fixed that error. Now it runs, but I am getting a new error:
```node
copying session node-modules...
Error: ENOENT: no such file or directory, link '/Users/aedan/Desktop/adobereact/src/session-src/node_modules' -> '/Users/aedan/Desktop/adobereact/dist/com.hendrix.demo/node_modules'
```
If anyone can explain this, that would be great. Thanks.

### Info
OS: Mac OSX
Date of Git Clone: 7/25/2020

<issue_comment>username_1: Hi, it means you did not install the packages. You should have run `npm install` after downloading.

<issue_comment>username_0: Ok, thanks. You should write actual installation instructions then, and make sure everything in the README is clear, so that it is easy for anyone to understand.<issue_closed>
{'fraction_non_alphanumeric': 0.08876298394711993, 'fraction_numerical': 0.012275731822474031, 'mean_word_length': 4.0, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24752374', 'n_tokens_mistral': 346, 'n_tokens_neox': 319, 'n_words': 135}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Make FtcRobotController module a proper AAR
username_0: _Before issuing a pull request, please see the contributing page._

_Yes, I understand Git and GitHub and meant to open this pull request._

Hello! This is simply to reorganize modifications into one central module, 'TeamCode'.

<issue_comment>username_1: The ability to modify the main screen of the app is very useful, and something that a lot of teams have done. This change would prevent them from doing that (unless they use the OpenFTC project, of which I'm the founder).

<issue_comment>username_0: Modifying the main screen of the app is easily done with OpModes using the hardwareMap (look at SensorMRColor).

<issue_comment>username_1: To an extent, that's true. You can modify things about the elements that are there, and you could even programmatically add new elements to the RelativeLayout if you wanted to. But that's so much more of a pain than editing the XML directly. More importantly, it only lets you change visible stuff, and it only lets you change it while an OpMode is running. You can't change the actual behavior of the activity at all, and you can't modify the UI that you see when you start the app.
{'fraction_non_alphanumeric': 0.03855619360131255, 'fraction_numerical': 0.003281378178835111, 'mean_word_length': 4.809523809523809, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16444869', 'n_tokens_mistral': 304, 'n_tokens_neox': 292, 'n_words': 194}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Allowing to use a genome not in igenomes
username_0: _(edited by @ewels to split arguments onto their own lines)_

Minimal inputs for this pipeline can be found at http://genoweb.toulouse.inra.fr/~username_0/nf-core/smrnaseq/

However, I am still getting an error:
```
No such variable: reference_genome -- Check script '/work2/project/fragencode/workspace/username_0/geneswitch/pipelines/srnaseq/smrnaseq/main.nf' at line: 830 or see '.nextflow.log' file for more details
```

#### Ensure the test suite passes
I have tried it, but `nextflow run . -profile test,docker` returns `Cannot enable more than one container engine -- Choose either one of: docker, singularity`.

#### Make sure your code lints
I do not know where to find the `nf-core` executable, sorry.

Please fill in the appropriate checklist below (delete whatever is not relevant). These are the most common things requested on pull requests (PRs).

## PR checklist
- [x] This comment contains a description of changes (with reason)
- [x] If you've fixed a bug or added code that should be tested, add tests!
- [ ] If necessary, also make a PR on the [nf-core/smrnaseq branch on the nf-core/test-datasets repo](https://github.com/nf-core/test-datasets/pull/new/nf-core/smrnaseq)
- [ ] Ensure the test suite passes (`nextflow run . -profile test,docker`).
- [ ] Make sure your code lints (`nf-core lint .`).
- [ ] Documentation in `docs` is updated
- [ ] `CHANGELOG.md` is updated
- [ ] `README.md` is updated

**Learn more about contributing:** https://github.com/nf-core/smrnaseq/tree/master/.github/CONTRIBUTING.md

<issue_comment>username_1: this PR was addressed in #65, should we close this? Or is there another feature/bug here? Thanks!

<issue_comment>username_0: Yes, this can be closed. Thanks a lot Lorena

--
<NAME> - PhD - CR INSERM
Iron metabolism genetics and regulation
IRSD INSERM U1220 - Bat B - CHU Purpan
Place du Docteur Baylac - CS 60039
31024 Toulouse Cedex 3
+33 5 62 74 45 08
sarah dot djebali at inserm dot fr
{'fraction_non_alphanumeric': 0.09503135552339605, 'fraction_numerical': 0.017848528702363725, 'mean_word_length': 3.7351598173515983, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11463584', 'n_tokens_mistral': 707, 'n_tokens_neox': 635, 'n_words': 261}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [Request] Compare arrays
username_0: I'd love an array compare function. Here is one based on a popular [SO post](https://stackoverflow.com/questions/7837456/how-to-compare-arrays-in-javascript).

```js
export default function arraysEqual (array, array2) {
  // compare lengths - can save a lot of time
  if (array.length != array2.length) return false

  for (var i = 0, l = array.length; i < l; i++) {
    // Check if we have nested arrays
    if (array[i] instanceof Array && array2[i] instanceof Array) {
      // recurse into the nested arrays
      if (!arraysEqual(array[i], array2[i])) return false
    } else if (array[i] != array2[i]) {
      // Warning - two different object instances will never be equal: {x:20} != {x:20}
      return false
    }
  }
  return true
}
```
<issue_comment>username_1: How does it differ from the [equals](https://github.com/nanoutils/nanoutils/blob/master/lib/equals/equals.js) method? Except that it's faster 🤔<issue_closed>
<issue_comment>username_0: Oh, I was searching for "compare" and "array", so I missed this one!
{'fraction_non_alphanumeric': 0.1248868778280543, 'fraction_numerical': 0.01809954751131222, 'mean_word_length': 3.551440329218107, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24265799', 'n_tokens_mistral': 361, 'n_tokens_neox': 338, 'n_words': 115}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to access stage from non-workbench class?
username_0: For example, to set a localized stage title?
<issue_comment>username_1: Hi, sorry, but the stage will not be provided outside the workbench. Create your own resource bundle in the workbench and localize the stage. Andy
<issue_comment>username_0: OK, thanks. For curiosity's sake, what is the reason that the stage is not available in UI components? There are UI classes which expect a stage instance, for example FileChooser.
<issue_comment>username_1: Hi, you are right. I must admit that I never used FileChooser and no one asked for it. The idea behind "hiding the stage" was simply that one can do too many dirty things with the node hierarchy. I'll add a way to access the stage in 2.1. Andy<issue_closed>
<issue_comment>username_0: Thanks, that makes sense.
<issue_comment>username_0: OK, while this feature is not released yet, I have sorted it out in the following way:

1. create a singleton Spring component, e.g. Placeholder, with a stage property
2. reference it in the workbench, and assign the stage to it
3. since it is a Spring component, one can link it into any JacpFX component or fragment and use the stage there
<issue_comment>username_2: Hello, is this still work in progress for a future release? I like JacpFX very much; getting access to the stage out-of-the-box would be a nice addition for what I want to do.
<issue_comment>username_1: Hi, yes it is... currently I am doing some cleanup / internal rewrite. When it is finished I will go through all open issues. THX Andy
{'fraction_non_alphanumeric': 0.049474335188620905, 'fraction_numerical': 0.008658008658008658, 'mean_word_length': 4.637630662020906, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30256075', 'n_tokens_mistral': 439, 'n_tokens_neox': 429, 'n_words': 247}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: dirPagination: multiple instances in a page
username_0: Hello, I've been experiencing issues with dir-paginate and multiple instances on the same page. Basically I have the same example as https://github.com/username_1/angularUtils/tree/master/src/directives/pagination#multiple-instances-with-ngrepeat with the following changes:

```html
<div class='row' data-ng-repeat='section in results'>
  ... code ...
  <ul class='list-group'>
    <li class='list-group-item' dir-paginate="test in section.tests | itemsPerPage: 10" pagination-id="section._id.fingerprint_id">
      ... complex code with tables and inner ng-repeat ...
    </li>
  </ul>
  <dir-pagination-controls pagination-id="section._id.fingerprint_id"></dir-pagination-controls>
</div>
```

The error I have is: https://docs.angularjs.org/error/$parse/syntax?p0=c7f197caa6208e290515a5120005c199eeabaa__currentPage&p1=is%20an%20unexpected%20token&p2=3&p3=60c7f197caa6208e290515a5120005c199eeabaa__currentPage&p4=c7f197caa6208e290515a5120005c199eeabaa__currentPage%20at%20Error%20(native)

The fingerprint is somehow *cut* at the beginning. Why? That's odd, because only the last pagination works. In other examples I have the first and the last pagination working correctly. What am I doing wrong?
<issue_comment>username_0: All right, after some investigation, it seems the $parse function is bothered by strings that start with a number.

```javascript
parse('a123a')
function (s,l,a,i){var v5,v6=l&&('a123a' in l);if(!(v6)){if(s){v5=s.a123a;}}else{v5=l.a123a;}return v5;}

parse('123a')
angular.js:38 Uncaught Error: [$parse:syntax] http://errors.angularjs.org/1.4.0-beta.6/$parse/syntax?p0=a&p1=is%20an%20unexpected%20token&p2=4&p3=123a&p4=a
```

Maybe some stronger check would be better in that function.
<issue_comment>username_0: It turns out I have found a solution (workaround). I generate the pagination id in the following way: ```pagination-id = "'_' + section._id.fingerprint_id"```

So I force the id to start with a character rather than a number. Please consider revisiting this part. Cheers
<issue_comment>username_1: Hi, thanks for the detailed reports. I just tested this out with `pagination-id="myId"` and `$scope.myId = "123a"`, and it worked okay. Can you confirm that you are using the latest version of the code (v0.6.0 if using Bower)? If you still get the error on the latest version, are you able to recreate it in a plunk (you can fork this one: http://plnkr.co/edit/Wtkv71LIqUR4OhzhgpqL?p=preview)? Cheers.
<issue_comment>username_0: Here's the plunk: http://plnkr.co/edit/pS37d24Tn4NgvPuNBfc3?p=preview

OK, what I'm trying to do is slightly different: my id's name is the value of some variable generated in an external ng-repeat, while you're setting the value of a fixed variable in a scope. Obviously in JS we cannot have variable names which start with a number, hence the parse error.<issue_closed>
<issue_comment>username_1: Ah, I see. Yeah, I ran into a similar issue recently with ids containing a hyphen. Comes down to the same problem - identifiers in Angular expressions must be valid JavaScript identifiers too. This isn't an issue specific to this module, so I'll close this now.
<issue_comment>username_0: Yep, I read about that. I would suggest implementing some sort of sanitization of the variable name (start with a character, replace hyphens with underscores, etc.) so users can use it without worrying about it, as the pagination-id can be generated starting from user data which might contain numbers, hyphens and other invalid characters.

Anyway, great module! Congrats and thanks!
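For illustration, here is the kind of sanitization the last comment suggests, sketched in Python rather than the module's JavaScript; the function name is hypothetical and this is not the module's actual code:

```python
import re

def sanitize_pagination_id(raw: str) -> str:
    """Turn arbitrary user data into a valid identifier for pagination-id."""
    # Replace anything that is not a letter, digit or underscore (e.g. hyphens).
    cleaned = re.sub(r"\W", "_", raw)
    # Identifiers must not start with a digit, so prefix an underscore if needed.
    if not cleaned or cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned

# '123a-b' becomes '_123a_b', which is a valid JavaScript identifier.
print(sanitize_pagination_id("123a-b"))
```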
{'fraction_non_alphanumeric': 0.0914164230667021, 'fraction_numerical': 0.04225352112676056, 'mean_word_length': 4.286516853932584, 'pattern_counts': {'":': 0, '<': 17, '<?xml version=': 0, '>': 17, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 2, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6483383', 'n_tokens_mistral': 1249, 'n_tokens_neox': 1137, 'n_words': 421}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Asynchronous service shutdown
username_0: This PR introduces asynchronous service shutdown. It is a preliminary to https://github.com/habitat-sh/habitat/issues/2447.

Please read the commit messages for much more detail.
<issue_comment>username_1: Thanks for the pull request! Here is what will happen next:

1. Your PR will be reviewed by the maintainers
2. If everything looks good, one of them will `approve` it, and your PR will be merged.

Thank you for contributing!
<issue_comment>username_0: Here are some tips for how you might manually evaluate this PR:

# Running the Supervisor

I've enabled some tracing here, but you can modify that as you like. The important thing is to use all new binaries. Change paths as appropriate.

```sh
sudo RUST_LOG=habitat_sup::manager::service::spec=trace,habitat_sup::manager::spec_watcher=trace \
  HAB_LAUNCH_BINARY=/home/maier/src/habitat-sh/habitat/target/debug/hab-launch \
  HAB_SUP_BINARY=/home/maier/src/habitat-sh/habitat/target/debug/hab-sup \
  HAB_BINARY=/home/maier/src/habitat-sh/habitat/target/debug/hab \
  hab sup run
```

# Proving Shutdown is Asynchronous

In a separate terminal, run the following:

```sh
watch -n1 sudo /home/maier/src/habitat-sh/habitat/target/debug/hab svc status
```

As you stop various services, you will observe that the status command continues to run, and never hangs. Compare and contrast with the current stable Supervisor.

# Simple Service Shutdown

Load up username_0/test-probe/0.1.9, and then stop it. This package has a `post-stop` hook that takes 15 seconds to run. Were this to run on a stable Supervisor, it would block the Supervisor from doing anything else for that amount of time.

```sh
hab svc load username_0/test-probe/0.1.9
# let it start up
hab svc stop username_0/test-probe
```

Observe that you can run `hab svc status`, or `hab svc load core/redis`, or anything else you care to do while the service is shutting down. If you run `hab svc unload username_0/test-probe` you should see the same behavior.

# Supervisor Shutdown

Load several services:

```sh
hab svc load core/redis
hab svc load username_0/test-probe/0.1.9
```

Allow them to start up, then kill the Supervisor, either via `^C` or `hab sup term`. Observe the orderly shutdown of all services, including the running of the `post-stop` hook.

# Control Gateway Shutdown

While waiting for the Supervisor to shut down, observe that running any other commands (`hab svc status`, `hab svc load core/traefik`, etc.) will not run, and will be met with a message explaining that the Supervisor is shutting down.

```sh
hab svc load username_0/test-probe/0.1.9
# wait for it to start
hab sup term
# wait 2-3 seconds
hab svc load core/traefik
```

Alternatively, if you are running `hab svc status` in a loop, you'll see the message without having to do anything else, since the status command is serviced by the control gateway. You *must* use the new `hab` binary to see the message, though.

# Demonstration of "Busy" Services

Once `username_0/test-probe` is running, stop it. Wait until it has started its post-run hook, and then start it up again. Observe that the Supervisor waits until the service has completely shut down before starting it back up again.

```sh
hab svc load username_0/test-probe/0.1.9
# wait for it to start
hab svc stop username_0/test-probe
# wait 3-4 seconds
hab svc start username_0/test-probe
```

# Service Updates

The current stable version of `username_0/test-probe` is 0.1.9. Since you don't have access to my origin, and thus can't promote and demote packages at will, you can simulate an update this way.

```sh
hab pkg uninstall username_0/test-probe/0.1.9
hab pkg install username_0/test-probe/0.1.8
hab svc load username_0/test-probe --strategy=at-once
```

The Supervisor will see that there's a new version of `test-probe` and restart it for you. If you've been running `hab svc status` in a loop, you'll see this all happen in real time.
<issue_comment>username_2: ![](https://media.giphy.com/media/9VrIWFdCG8Ynhexwc2/giphy.gif)
<issue_comment>username_3: I believe we will need to add a new supervisor version requirement in this new launcher. If I try to run this launcher with 0.74, I get `Timeout exceeded waiting for IPC connection` and the supervisor never starts.
<issue_comment>username_0: Force pushed a rebase to deal with some merge conflicts.
<issue_comment>username_0: @username_4 I think for now I'd like to leave this as an AtomicBool-based approach, rather than a channel-based one, though you are correct in observing that the "shape" of the problem is rather channel-esque. :smile:

For our immediate purposes, however, the AtomicBool acts like a channel with a capacity of one message (and all the producers add the same message, and the presence of one message or 50 is exactly the same). Furthermore, this approach doesn't carry much additional complexity at all.

In the future, as we start doing more operations asynchronously, our needs may dictate something more involved. If and when such a time comes, this `ReconciliationFlag` type should provide a decent interface behind which we can manage that migration.
<issue_comment>username_4: I can't really agree with that assessment. The approach is very subtle and easy to break with seemingly innocent refactoring. Functions called inside conditions don't usually have side-effects that would cause unsoundness if moved outside the condition. It's not a *lot* of code, and the concept is simple, but the edge cases are important, yet easy to miss. Imagine someone reading this code for the first time who doesn't have all the same context.
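To make the one-slot-channel analogy concrete, here is a minimal Python analogue of such a flag (the Rust original uses an AtomicBool; this sketch is only an illustration, assumes a single consumer, and is not the PR's actual code):

```python
import threading

class ReconciliationFlag:
    """Many producers set the flag; one consumer checks-and-clears it.

    Like a channel with capacity one: setting it twice is the same as
    setting it once, which is all the reconciliation loop needs to know.
    """

    def __init__(self) -> None:
        self._event = threading.Event()

    def record_change(self) -> None:
        # Producers call this; redundant calls are harmless no-ops.
        self._event.set()

    def toggle_if_set(self) -> bool:
        # The consumer must call this unconditionally, never behind a
        # short-circuiting condition, because it has the side effect of
        # clearing the flag (the subtlety username_4 points out above).
        was_set = self._event.is_set()
        self._event.clear()
        return was_set
```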
{'fraction_non_alphanumeric': 0.06757227446882619, 'fraction_numerical': 0.011842563566701497, 'mean_word_length': 4.169216921692169, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2391826', 'n_tokens_mistral': 1712, 'n_tokens_neox': 1604, 'n_words': 800}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: tree-wide: add missing parent icon themes
username_0: <!--
Nixpkgs has a lot of new incoming Pull Requests, but not enough people to review this constant stream. Even if you aren't a committer, we would appreciate reviews of other PRs, especially simple ones like package updates. Just testing the relevant package/service and leaving a comment saying what you tested, how you tested it and whether it worked would be great. List of open PRs: <https://github.com/NixOS/nixpkgs/pulls>, for more about reviewing contributions: <https://hydra.nixos.org/job/nixpkgs/trunk/manual/latest/download/1/nixpkgs/manual.html#chap-reviewing-contributions>. Reviewing isn't mandatory, but it would help out a lot and reduce the average time-to-merge for all of us. Thanks a lot if you do!
-->

###### Motivation for this change

Icon themes may inherit from other icon themes. The inheritance is specified using the `Inherits` key in the `index.theme` file. According to the [icon theme specification](https://specifications.freedesktop.org/icon-theme-spec/icon-theme-spec-latest.html) that means that icons not provided by the theme are looked for in its parents. The idea is that missing icons are looked for in the parent icon themes, in the order specified. Therefore the parent themes should be installed as dependencies for a more complete experience regarding the icon sets used.

###### Things done

<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->

- [x] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
- Built on platform(s)
  - [x] NixOS
  - [ ] macOS
  - [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Determined the impact on package closure size (by running `nix path-info -S` before and after)
- [ ] Ensured that relevant documentation is up to date
- [x] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
<issue_comment>username_0: I have tested with `propagatedBuildInputs` and with `propagatedUserEnvPkgs` in a VM configured with the `startx` display manager and the `fluxbox` window manager. I have listed the files in `/run/current-system/sw/share/icons`. I have also looked at the themes with the application `lxappearance`. The only theme explicitly installed was `papirus-maia-icon-theme` (using `environment.systemPackages`).

Only with `propagatedUserEnvPkgs` were the parent themes made available automatically. The following screenshots illustrate the results.

With `propagatedBuildInputs`:

![Screenshot_2020-04-14_20-47-56](https://user-images.githubusercontent.com/1217934/79284985-90547300-7e92-11ea-8941-b8200cbc7130.png)

With `propagatedUserEnvPkgs`:

![Screenshot_2020-04-14_20-51-47](https://user-images.githubusercontent.com/1217934/79284979-8a5e9200-7e92-11ea-84e6-8b66e2c92956.png)

I think this is conclusive.
<issue_comment>username_1: I think it might be preferable (even though ugly) to symlink the inherited themes into the output. Relying on `propagatedUserEnvPkgs` is just too big of a violation of encapsulation and purity for my taste.
<issue_comment>username_2: If symlinking works, then this should be the preferred solution. In the future, a setup hook that scans `index.theme` and performs the symlink on its own might be the way to go.
<issue_comment>username_3: Won't this result in collisions if the parent theme is installed separately? (That might be fine, they're the same content after all).
<issue_comment>username_0: Testing with symbolic links in `papirus-maia-icon-theme` with both `papirus-maia-icon-theme` and `breeze-icons` in `environment.systemPackages`:

![Screenshot_2020-04-15_09-02-11](https://user-images.githubusercontent.com/1217934/79335552-95e4a400-7ef8-11ea-93f1-672dc6daa721.png)

<details><summary>Collisions are reported at build time:</summary>

```
$ nixos-rebuild build-vm --show-trace --option sandbox true -I nixpkgs=/alt/nixpkgs -I nixos-config=/alt/nixos/configuration.fluxbox.nix 2>&1 | tee /var/tmp/nixos-rebuild-vm.log
building Nix...
building the system configuration...
these derivations will be built:
  /nix/store/1cspvw79b958fsw7073d2cxw55ylj6bc-system-path.drv
[...]
building '/nix/store/1cspvw79b958fsw7073d2cxw55ylj6bc-system-path.drv'...
collision between `/nix/store/bq8hnk8n1xyy18nmwd7mf78kirs2s0d9-breeze-icons-5.68.0/share/icons/breeze-dark/actions/16@2x/stateshape.svg' and `/nix/store/89bsafcnz0a6r1wfaiifvmsnbh6ccnxz-papirus-maia-icon-theme-2019-07-26/share/icons/breeze-dark/actions/16@2x/stateshape.svg'
collision between `/nix/store/bq8hnk8n1xyy18nmwd7mf78kirs2s0d9-breeze-icons-5.68.0/share/icons/breeze-dark/actions/16@2x/dialog-object-properties.svg' and `/nix/store/89bsafcnz0a6r1wfaiifvmsnbh6ccnxz-papirus-maia-icon-theme-2019-07-26/share/icons/breeze-dark/actions/16@2x/dialog-object-properties.svg'
collision between `/nix/store/bq8hnk8n1xyy18nmwd7mf78kirs2s0d9-breeze-icons-5.68.0/share/icons/breeze-dark/actions/16@2x/document-save-all.svg' and `/nix/store/89bsafcnz0a6r1wfaiifvmsnbh6ccnxz-papirus-maia-icon-theme-2019-07-26/share/icons/breeze-dark/actions/16@2x/document-save-all.svg'
collision between `/nix/store/bq8hnk8n1xyy18nmwd7mf78kirs2s0d9-breeze-icons-5.68.0/share/icons/breeze-dark/actions/16@2x/kruler-south.svg' and `/nix/store/89bsafcnz0a6r1wfaiifvmsnbh6ccnxz-papirus-maia-icon-theme-2019-07-26/share/icons/breeze-dark/actions/16@2x/kruler-south.svg'
collision between `/nix/store/bq8hnk8n1xyy18nmwd7mf78kirs2s0d9-breeze-icons-5.68.0/share/icons/breeze-dark/actions/16@2x/highlight-pointer-spot.svg' and `/nix/store/89bsafcnz0a6r1wfaiifvmsnbh6ccnxz-papirus-maia-icon-theme-2019-07-26/share/icons/breeze-dark/actions/16@2x/highlight-pointer-spot.svg'
[...]
collision between `/nix/store/bq8hnk8n1xyy18nmwd7mf78kirs2s0d9-breeze-icons-5.68.0/share/icons/breeze/places/64/folder-downloads.svg' and `/nix/store/89bsafcnz0a6r1wfaiifvmsnbh6ccnxz-papirus-maia-icon-theme-2019-07-26/share/icons/breeze/places/64/folder-downloads.svg'
created 39920 symlinks in user environment
gtk-update-icon-cache: Cache file created successfully.
gtk-update-icon-cache: Cache file created successfully.
gtk-update-icon-cache: Cache file created successfully.
building '/nix/store/ylzwcnfziq0sh9h2cdcxdxq89s5i4app-dbus-1.drv'...
building '/nix/store/0ggf9zbx6ackvk98fqkyr4ccyy1lx747-unit-polkit.service.drv'...
building '/nix/store/q6n9wsd55dvpqnr3d3xh7fj1jprf8pzm-unit-systemd-fsck-.service.drv'...
building '/nix/store/93i3jmhp60v3xxavm22fn8d571ljy4dz-unit-dbus.service.drv'...
building '/nix/store/scjf25hizm5c8lyl9fdbsjnaaxclgxyx-unit-dbus.service.drv'...
building '/nix/store/f73af23ffjps3iw9l7m48k39fqm8dqir-system-units.drv'...
building '/nix/store/kwnp8n3jczl3wmim25mixbl178478xa8-user-units.drv'...
building '/nix/store/44rvwlz38pjra1bryds6qr0dy0svmlk0-etc.drv'...
building '/nix/store/1crsvb182anj0b9was6bqgdjxqs1yv9v-nixos-system-nixos-20.09pre-git.drv'...
building '/nix/store/rg3qf72c3xcvzygc16050jf68q7rrfsw-closure-info.drv'...
building '/nix/store/zxzpd7zcpmskib7yjcg3xi1h0havalkw-run-nixos-vm.drv'...
building '/nix/store/afqj6dvskdb2hf0962yiz76qv6mkm0mq-nixos-vm.drv'...
Done. The virtual machine can be started by running /nix/store/088zz2r26xw8vj61kmrrcfcp3h173vzn-nixos-vm/bin/run-nixos-vm
```

</details>
<issue_comment>username_0: Should I go on and implement this solution for all icon themes?
<issue_comment>username_0: Using symlinks has one complexity: indirect inheritance. The parent themes of the parents would also need to be symlinked, and so on, up to the inheritance root. This is not implemented yet in this PR. Also, I am not happy with the number of collisions.
<issue_comment>username_0: cc @username_4
<issue_comment>username_0: Now there is a setup hook in `hicolor-icon-themes` that, for each installed icon theme, finds its parent themes by looking at `index.theme`. Then each parent theme is searched for among the dependencies. If found, the theme directory is linked at `$out/share/icons`. The dependencies providing parent icon themes should be propagated. It works for indirect inheritance (see the sketch below).
<issue_comment>username_4: These collision warnings have always been annoying, :smile: even if this introduced some. I don't think there is a way to avoid that, unless we just make it quiet in general.
<issue_comment>username_0: Fixed merge conflicts.
<issue_comment>username_4: This is approved @username_0 but we need to target staging since this appears to be a mass rebuild.
<issue_comment>username_0: @username_4 Targeting staging introduces some file conflicts. I suppose I have to fix them. Any direction on how to do it?
<issue_comment>username_5: This pull request has been mentioned on **NixOS Discourse**. There might be relevant details there: https://discourse.nixos.org/t/proliferation-of-symbolic-links-in-run-current-system-sw-share-icons/7587/1
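As an illustration of the algorithm the setup hook performs, here is a Python sketch of the logic described above; it is not the actual bash hook, and the function signature and paths are assumptions:

```python
import configparser
import os

def link_parent_themes(icons_dir: str, theme: str, dep_icon_dirs: list) -> None:
    """Follow the Inherits chain of index.theme and symlink parents found
    among the dependencies, including indirect (transitive) parents."""
    pending = [os.path.join(icons_dir, theme)]
    while pending:
        cfg = configparser.ConfigParser(strict=False, interpolation=None)
        cfg.read(os.path.join(pending.pop(), "index.theme"))
        if not cfg.has_section("Icon Theme"):
            continue
        inherits = cfg.get("Icon Theme", "Inherits", fallback="")
        for parent in (p.strip() for p in inherits.split(",") if p.strip()):
            link = os.path.join(icons_dir, parent)
            if os.path.exists(link):
                continue  # already installed or already linked
            for dep in dep_icon_dirs:
                candidate = os.path.join(dep, parent)
                if os.path.isdir(candidate):
                    os.symlink(candidate, link)
                    pending.append(candidate)  # recurse into the parent's parents
                    break
```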
{'fraction_non_alphanumeric': 0.1058748943364328, 'fraction_numerical': 0.058431952662721894, 'mean_word_length': 4.810313075506445, 'pattern_counts': {'":': 0, '<': 24, '<?xml version=': 0, '>': 25, 'https://': 9, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '18141300', 'n_tokens_mistral': 3689, 'n_tokens_neox': 3320, 'n_words': 811}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Sass files
username_0: Hi, is there a way to add a Sass preprocessor to this? I've tried using sassr and the following command:

`watchify -t [ babelify --presets [ react ] sassr ] app.jsx -o bundle.js`

But it doesn't seem to do anything.<issue_closed>
{'fraction_non_alphanumeric': 0.08191126279863481, 'fraction_numerical': 0.0034129692832764505, 'mean_word_length': 3.983050847457627, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14052984', 'n_tokens_mistral': 102, 'n_tokens_neox': 94, 'n_words': 40}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: expose list-delim option for simplifying huge css changes
username_0: Please read the commit message for more information.

I'll be upfront that, although it builds, I haven't actually tested the code, as there's more to it than just a change in the backend library. In our case I believe we're going to need changes to `gulp-sass` as well.

Is this a change that you'd consider finishing off, provided the use case? Some example commits where this would be useful:

https://github.com/cuckoosandbox/cuckoo/commit/cb7fda22c38e244e1e6bccc1ee2f9d4e8008c21c
https://github.com/cuckoosandbox/cuckoo/commit/cc89f90443a3c3e6337bf8d4043f88456a3b0886
https://github.com/cuckoosandbox/cuckoo/commit/02b95ca2a70ec59a8ecbc2a61123ab85f5aedc4e.
<issue_comment>username_1: Thanks for taking the time to contribute. However, I do not believe LibSass is the right place for this change. If anything, we may be reducing output options. There are just way too many edge cases to handle when dealing with output generation. Additionally, due to how sass-spec works, this isn't a feature we can reasonably test, and as such guarantee to maintain.

Formatting operations are much easier to do as a post-process on the generated CSS. Since you're already using gulp, I recommend using a PostCSS-backed formatter.
<issue_comment>username_0: Fair enough. Would you by any chance know of a starting point for such a CSS formatter in Gulp? I just ran into `postcss`, but didn't see anything in this direction in there. I'm personally not all that familiar with the tools out there in this area.
<issue_comment>username_1: PostCSS is a generic CSS parser and AST. There's a huge ecosystem of tools built upon it. [CSSComb][1] or [stylelint][2] might be good starting points.

[1]: http://csscomb.com/config
[2]: https://stylelint.io/
<issue_comment>username_0: Oh, sweet, thanks! https://cuckoo.sh/ebf1b499512a098a/screenshot.png

Best of luck with the rest of the project.
{'fraction_non_alphanumeric': 0.06168505516549649, 'fraction_numerical': 0.043630892678034106, 'mean_word_length': 4.782608695652174, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 5, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10257778', 'n_tokens_mistral': 640, 'n_tokens_neox': 584, 'n_words': 262}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: compile error when structopt is used in namespace
username_0: Hello, and first of all, thanks for this nice piece of software! When `STRUCTOPT(...)` is used inside a namespace, I encounter the following error:

```
/home/username_0/dev/zap/build/root/include/structopt/third_party/visit_struct/visit_struct.hpp:839:22: error: ‘visitable’ is not a class template
  839 | template <> struct visitable<STRUCT_NAME, void> { \
      |                    ^~~~~~~~~
/home/username_0/dev/zap/build/root/include/structopt/app.hpp:13:19: note: in expansion of macro ‘VISITABLE_STRUCT’
   13 | #define STRUCTOPT VISITABLE_STRUCT
      |                   ^~~~~~~~~~~~~~~~
so.cpp:16:1: note: in expansion of macro ‘STRUCTOPT’
   16 | STRUCTOPT(Options, config_file, bind_address, verbose, user, files);
      | ^~~~~~~~~
/home/username_0/dev/zap/build/root/include/structopt/third_party/visit_struct/visit_struct.hpp:839:51: error: explicit specialization of non-template ‘foo::visit_struct::traits::visitable’
  839 | template <> struct visitable<STRUCT_NAME, void> {
```

Here's the minimal example I used, compiled with `gcc (Debian 10.2.1-6) 10.2.1 20210110`:

```
g++ -Wall -std=c++20 -o test test.cpp
```

```c++
#include <iostream>
#include <optional>
#include <vector>
#include <structopt/app.hpp>

namespace foo {

struct Options {
  std::string config_file;
  std::optional<std::string> bind_address;
  std::optional<bool> verbose = false;
  std::optional<std::pair<std::string, std::string>> user;
  std::vector<std::string> files;
};

STRUCTOPT(Options, config_file, bind_address, verbose, user, files);

}

int main(int argc, char *argv[]) {
  try {
    auto options = structopt::app("my_app").parse<foo::Options>(argc, argv);
    std::cout << "config_file = " << options.config_file << "\n";
  } catch (structopt::exception& e) {
    std::cout << e.what() << "\n";
    std::cout << e.help();
  }
}
```

If I remove `namespace foo { ... }`, it compiles without error. Could you please have a look? Thank you!
<issue_comment>username_1: Move the `STRUCTOPT` macro usage outside the namespace and it'll work.

```cpp
#include <iostream>
#include <optional>
#include <vector>
#include <structopt/app.hpp>

namespace foo {

struct Options {
  std::string config_file;
  std::optional<std::string> bind_address;
  std::optional<bool> verbose = false;
  std::optional<std::pair<std::string, std::string>> user;
  std::vector<std::string> files;
};

}

STRUCTOPT(foo::Options, config_file, bind_address, verbose, user, files);

int main(int argc, char *argv[]) {
  try {
    auto options = structopt::app("my_app").parse<foo::Options>(argc, argv);
    std::cout << "config_file = " << options.config_file << "\n";
  } catch (structopt::exception& e) {
    std::cout << e.what() << "\n";
    std::cout << e.help();
  }
}
```

The above compiles fine.

```console
$ g++ -I. -std=c++17 -o main main.cpp
$ ./main -h

USAGE: my_app [FLAGS] [OPTIONS] config_file files

FLAGS:
    -v, --verbose

OPTIONS:
    -b, --bind-address <bind_address>
    -u, --user <user>
    -h, --help <help>
    -v, --version <version>

ARGS:
    config_file
    files
```<issue_closed>
<issue_comment>username_0: OK. Thanks for your help!
{'fraction_non_alphanumeric': 0.1597080291970803, 'fraction_numerical': 0.015766423357664233, 'mean_word_length': 3.22962962962963, 'pattern_counts': {'":': 0, '<': 57, '<?xml version=': 0, '>': 33, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '12250812', 'n_tokens_mistral': 1215, 'n_tokens_neox': 1137, 'n_words': 283}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: nixos: Add nixpkgs.pkgs option
username_0: This lets the user set pkgs directly, so that it can be injected externally and be reused among evaluations of NixOS.

###### Motivation for this change

This provides an easy and documented way to set `pkgs` for applications external to NixOS, and it provides easy access to some extra flexibility for users of plain NixOS.

###### Things done

<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->

- [x] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `build-use-sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS)
- Built on platform(s)
  - [x] NixOS
  - [ ] macOS
  - [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

---

I have tested this change manually using nix-repl:

```
(import ./nixos {
  configuration.imports = [
    ({lib,...}: { nixpkgs.pkgs = lib.mkDefault (pkgs // { nginxStable = pkgs.hello; }); })
    ({lib,...}: { nixpkgs.pkgs = null; })
  ];
}).config.services.nginx.package
-> nginx
```

The first module defines a default pkgs. This is what NixOps could do, for example. The second module sets pkgs to null, so we get the old behavior in this example. Omitting the second module has the effect of using the default pkgs from 'NixOps'.
<issue_comment>username_0: @username_2 do we have a way to check if nixpkgs.config, nixpkgs.system or nixpkgs.overlays have been set by the user?
<issue_comment>username_0: Note that NixOps should probably not always set `pkgs` by default, because that's a breaking change. Instead it should provide an opt-in mechanism. The opt-in setting can exist at the network level, machine level or both. Opting out is always possible at the machine level by setting `nixpkgs.pkgs = null`.
<issue_comment>username_1: Another approach to preventing repeated evaluation of Nixpkgs: https://github.com/NixOS/nix/commit/0395b9b94af56bb814810a32d680c606614b29e0
<issue_comment>username_0: That's a very good idea, but it does not address the general usefulness of being able to inject `pkgs` directly. If I already have a `pkgs` that I need to use, I don't want to jump through any hoops to use it for a NixOS system. Also, `pkgs` provides an 'escape hatch' for modifications to Nixpkgs that cannot be achieved through the current `nixpkgs.*` options. That's the point of PR https://github.com/NixOS/nixpkgs/pull/32422 in which @username_2 suggested `externalPkgs`. That's what I've implemented here as `pkgs`, keeping the name simple.
<issue_comment>username_0: Is there anything I can do to improve this more powerful option for setting `pkgs` in NixOS and/or get it merged?
<issue_comment>username_2: Yes, you can do:

```
{ config, options, ...}:

let
  isSpecialized =
    options.nixpkgs.overlays.isDefined
    && options.nixpkgs.config.isDefined
    && options.nixpkgs.system.isDefined;
  isOverwritten = options.nixpkgs.pkgs.isDefined;
in
  …
```

While I understand what you are looking for, remember that it is a valid use case to use `nixpkgs.pkgs` to ignore other options. I guess what you might look for is whether there is a priority inversion, where a forced overlays / config / system is not taken into account. In such a case we do not yet report the priority, but you might be able to extract it from the `opt.definitions`.
<issue_comment>username_0: I wasn't able to determine whether options were being set. The module system would have to preserve and expose the priority for that to work, which it doesn't. That is, it's not in the output of `nix-instantiate --strict --show-trace --eval -E '(import ./nixos { configuration = { }; }).options.nixpkgs.config'`
<issue_comment>username_0: OK, I've implemented everything as suggested. Detecting the priority inversion is outside the scope of this feature, because it requires changes to the module system.
<issue_comment>username_3: I can't understand how this interacts with `nixos/lib/eval-config.nix`. Does it still need the `pkgs` and `pkgsModule`?
<issue_comment>username_0: @username_3, this change does not interact much with `nixos/lib/eval-config.nix`. `eval-config.nix` did not really need to implement `pkgsModule` in the first place, but it seems convenient for callers of `eval-config.nix` to do so. Technically it could be removed, but at the cost of breaking some users' nix code. I suppose it could be deprecated, but I don't think that it would be worth the cost at this point.

Your question sounds like it is related to PR #32422. That PR deals with calling NixOS from an evaluation of Nixpkgs. It is loosely related to this PR. The one does not require the other, but applying this PR first is more convenient.
<issue_comment>username_0: @username_3 come to think of it, it might be better if `eval-config.nix` uses `config.nixpkgs.pkgs` instead of its own `pkgsModule`. It should probably keep the `pkgs` argument for now, though.
{'fraction_non_alphanumeric': 0.08760058522311631, 'fraction_numerical': 0.01005852231163131, 'mean_word_length': 4.1789772727272725, 'pattern_counts': {'":': 0, '<': 14, '<?xml version=': 0, '>': 15, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17314475', 'n_tokens_mistral': 1705, 'n_tokens_neox': 1577, 'n_words': 717}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: opentoonz v1.0.3 not compatible for windows 10
username_0: Well, I installed OpenToonz and all went fine, but when I try to start it up it says it isn't compatible. When I try to look up how to fix it, everyone talks about a file called lvcod64.dll; the problem is that I looked for it and it doesn't exist. So what am I supposed to do? How can I fix this?
<issue_comment>username_1: Hi Daynala,

OpenToonz v1.0.3 is definitely compatible with Windows 10. Now we just need to discover why it isn't working for you.

The first thing I would do is make sure you've uninstalled any previous installation of OpenToonz. Then I would double-check to make sure I'm installing v1.0.3 (as an older installation file can look the same too). The fix that required removing the .dll file should have been fixed back with the v1.0.2 release, so you shouldn't need to do any of that with v1.0.3.

We may need to know more about your system and setup, but my gut feeling is that the issue is with the initial installation, which points me to the two things above:

- Make sure you don't have older installations causing problems
- Double/triple-check to make sure you are installing the right version (I emphasize this point because I have every installer of OpenToonz in my download folder and it can be difficult to know which is which).

Keep us posted on your progress!
<issue_comment>username_2: Take a look at #46
<issue_comment>username_3: It does not work with 32-bit; switch to 64-bit if you have it.<issue_closed>
{'fraction_non_alphanumeric': 0.03636363636363636, 'fraction_numerical': 0.02012987012987013, 'mean_word_length': 4.223728813559322, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24982460', 'n_tokens_mistral': 440, 'n_tokens_neox': 413, 'n_words': 264}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Cannot open encrypted library - always get error message about wrong password
username_0: I cannot open the library with a password: I always get the 'wrong password' error message for the library, even though I can open it in the web interface or in the desktop client with the same password, and I triple-checked that I entered the password correctly.

I am running:

* Seafile Pro Server (Free) version 4.3.3
* iOS Seafile Pro Client version 2.1.1
<issue_comment>username_0: I noticed that every time I try to access an encrypted library from the iOS client, I get this in the web server access log (note HTTP return code 440):

`"POST /api2/repos/<repo-id-removed>/?op=setpassword HTTP/1.1" 440 1045 "-" "SeafilePro/1 CFNetwork/711.5.6 Darwin/14.0.0"`

Am I missing some configuration, or is this not actually supported on iOS?
<issue_comment>username_1: Can it work on seacloud.cc?
<issue_comment>username_0: Yes, it works on seacloud.cc. Is it something in my configuration?
<issue_comment>username_0: Alright, I did a bit of sniffing and now I know what is going on: my password starts with a semicolon (;), and apparently this messes up the iOS client (on my installation as well as on seacloud.cc). The web client and desktop client do not mind passwords starting with a semicolon. Would it be possible to also enable these kinds of passwords in the iOS client?

On a side note: I've also noticed that the web interface does not make the same call as the iOS client: the iOS client calls `/api2/repos/<repo-id>/?op=setpassword`, while the web client calls `/repo/set_password/` and sends as POST parameters `repo_id`, `username` and `password` (in cleartext!).
<issue_comment>username_1: I will fix this in the next version.
<issue_comment>username_1: fixed<issue_closed>
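For anyone wanting to reproduce, a request equivalent to the iOS client's call can be made from Python; the endpoint comes from the access log above, while the auth header shape is an assumption about the Seafile web API:

```python
import requests

def set_library_password(base_url, token, repo_id, password):
    # Same call the iOS client makes, per the access log above.
    resp = requests.post(
        base_url + "/api2/repos/" + repo_id + "/?op=setpassword",
        headers={"Authorization": "Token " + token},  # assumed auth scheme
        data={"password": password},
    )
    return resp.status_code  # 440 was observed when the password starts with ';'

# e.g. set_library_password("https://seafile.example.com", token, repo_id, ";secret")
```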
{'fraction_non_alphanumeric': 0.07025495750708215, 'fraction_numerical': 0.020963172804532578, 'mean_word_length': 4.570977917981073, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19733998', 'n_tokens_mistral': 516, 'n_tokens_neox': 493, 'n_words': 247}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: File is created even if extension is invalid
username_0: We noticed that if you call `ImageBuffer::save` with a path that has an invalid extension, for example `foo.invalid`, it will create an empty file called `foo.invalid`. We're using `0.23.14`.

It would be nice if the file weren't created if the extension is bad. Thanks for the awesome crate!
<issue_comment>username_1: This has been fixed on `master`, I believe unintentionally as part of unrelated refactoring. Unfortunately, we're far overdue for a new release...
<issue_comment>username_0: Nice, looking forward to it!
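The general pattern being asked for, sketched in Python rather than the crate's Rust (this is only an illustration of the guard, not the crate's actual fix; the extension set is an assumption):

```python
import pathlib

SUPPORTED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".bmp", ".ico", ".tiff"}

def checked_output_path(path: str) -> pathlib.Path:
    """Validate the extension before any file is opened, so a failed save
    never leaves an empty foo.invalid behind."""
    p = pathlib.Path(path)
    if p.suffix.lower() not in SUPPORTED_EXTENSIONS:
        raise ValueError("unrecognized image extension: " + path)
    return p

checked_output_path("foo.png")  # fine
try:
    checked_output_path("foo.invalid")
except ValueError as err:
    print(err)  # unrecognized image extension: foo.invalid, no file created
```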
{'fraction_non_alphanumeric': 0.07413509060955518, 'fraction_numerical': 0.013179571663920923, 'mean_word_length': 5.141414141414141, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16722281', 'n_tokens_mistral': 165, 'n_tokens_neox': 156, 'n_words': 85}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: react_component outside of view
username_0: Here's a failing test case that demonstrates #634
<issue_comment>username_1: The ability to reproduce this is appreciated. @username_0 can you include the fix with the test? CI needs to pass before we can merge.
<issue_comment>username_0: I'm not sure why CI failed. It seems to have failed during an npm install.
<issue_comment>username_0: ![screen shot 2016-12-05 at 7 51 20 pm](https://cloud.githubusercontent.com/assets/12464281/20912449/49736390-bb24-11e6-9c8d-9a98afedf4a2.png)
<issue_comment>username_1: @username_0 Please create a changelog entry. Once tests pass, I'll create a release.
{'fraction_non_alphanumeric': 0.12093023255813953, 'fraction_numerical': 0.07441860465116279, 'mean_word_length': 5.572519083969466, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8419390', 'n_tokens_mistral': 318, 'n_tokens_neox': 275, 'n_words': 81}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Yii2 Integration
username_0: I used this widget a long time ago with Yii2 and it worked fine. Recently I returned to use it in a new Yii2 project using Bootstrap 4. Honestly, I find it very hard to get a complete running example using Yii2 for Ajax image upload. I've browsed the complete demo site and read most of the documentation, but couldn't find a complete example. I need a complete guide/tutorial on how to use it with Yii2. Thanks in advance.
<issue_comment>username_1: Something similar happens to me in a new project with Yii2 version 2.0.30 and Bootstrap 4, and the following happens: it does not send the file after submitting. I use it in different projects and the code is exactly the same. I would like to know if there is a problem with the latest versions of Yii2, because when I submit with the `fileInput()` field which ActiveForm provides by default, the `$_FILES` array comes through perfectly; but if I use your FileInput widget, the array comes back empty. I'll leave a little code:

`$form->field($model, 'file_1')->widget(FileInput::class, [ 'options' => ['accept' => 'image/*']]);`

sends nothing in `$_FILES`, while:

`$form->field($model, 'file_1')->fileInput();`

sends the file correctly... Can someone help me, please? Cheers.
{'fraction_non_alphanumeric': 0.06825153374233128, 'fraction_numerical': 0.01303680981595092, 'mean_word_length': 4.262096774193548, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 9, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26302328', 'n_tokens_mistral': 362, 'n_tokens_neox': 335, 'n_words': 198}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix documentation links in basics.md
username_0: Links to other files need to be either relative to themselves (doc/performance.md -> performance.md) or absolute (doc/performance.md -> /doc/performance.md). This change fixes the documentation when read on GitHub.
<issue_comment>username_1: Somehow, you managed to break the build! (The tests are all failing.) :-\
<issue_comment>username_1: You are of course correct. But why would CI fail?
<issue_comment>username_1: Whatever, I am merging. This is just documentation.
<issue_comment>username_0: Looks like a CircleCI issue, although I don't see anything noticeable on CircleCI's or GitHub's status pages that would line up. None of the CircleCI tests were able to check out the code from GitHub since SSH failed.
{'fraction_non_alphanumeric': 0.06892230576441102, 'fraction_numerical': 0.006265664160401002, 'mean_word_length': 5.1937984496124034, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 8, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1812157', 'n_tokens_mistral': 215, 'n_tokens_neox': 197, 'n_words': 104}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Server/Client: Role based Object ownership and CRUD permissions
username_0: **Is your feature request related to a problem? Please describe.**

When setting permission roles, we could do with the concept of an ownership domain as well as the existing granularity.

**Describe the solution you'd like**

It would be great if you could set the CRUD operations for the role in one of the following domains:

- Objects in Model the User has created
- Objects in Model created by any User with the same Role
- Objects in Model created by any User for a specific Role
- All Objects in Model

It would be even better if these were additive, so I could have a user that could manage Objects they had created but also manage those for a different Role too (a minimal sketch of such a check follows the example below).

**Implementation Requirements**

I would imagine that, as a minimum, the base object model would need to include a Created By property for all Objects.

**Example**

- Basic App User - can only CRUD objects they have created
- Manager User - can CRUD all objects created by Basic App Users (or those in their Role)
- Super Admin - can CRUD all objects
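The sketch referenced above: a minimal, hypothetical Python illustration of additive domain grants (the field names `created_by`, `creator_role`, etc. are assumptions, not part of any existing API):

```python
from enum import Enum, auto

class Domain(Enum):
    OWN = auto()            # objects the user created themselves
    SAME_ROLE = auto()      # objects created by any user sharing the role
    SPECIFIC_ROLE = auto()  # objects created by users of a named role
    ALL = auto()            # every object in the model

def may_act(user, obj, grants) -> bool:
    """grants is additive: a list of (Domain, detail) pairs on the user's role."""
    for domain, detail in grants:
        if domain is Domain.ALL:
            return True
        if domain is Domain.OWN and obj["created_by"] == user["id"]:
            return True
        if domain is Domain.SAME_ROLE and obj["creator_role"] == user["role"]:
            return True
        if domain is Domain.SPECIFIC_ROLE and obj["creator_role"] == detail:
            return True
    return False

# A Manager may manage their own objects plus anything Basic users created.
manager = {"id": 7, "role": "manager"}
grants = [(Domain.OWN, None), (Domain.SPECIFIC_ROLE, "basic")]
obj = {"created_by": 3, "creator_role": "basic"}
print(may_act(manager, obj, grants))  # True
```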
{'fraction_non_alphanumeric': 0.036490008688097306, 'fraction_numerical': 0.0008688097306689834, 'mean_word_length': 3.760330578512397, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11434173', 'n_tokens_mistral': 286, 'n_tokens_neox': 264, 'n_words': 186}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: PlotWidget add label not working?
username_0: I am trying to use PlotWidget to plot a single view in a window, but when I try to add a label, the label behaves strangely.

```python
class Chart1(pg.PlotWidget):
    def __init__(self):
        super().__init__()
        self.label = pg.LabelItem(justify="right")
        self.addItem(self.label)

        self.data1 = np.random.normal(size=100)
        self.data2 = np.random.normal(size=100)
        self.pdi1 = self.plot(self.data1, pen="w")
        self.pdi2 = self.plot(self.data2, pen="g")

        self.vLine = pg.InfiniteLine(angle=90, movable=False)
        self.hLine = pg.InfiniteLine(angle=0, movable=False)
        self.plotItem.addItem(self.vLine, ignoreBounds=True)
        self.plotItem.addItem(self.hLine, ignoreBounds=True)

        self._sigProxy = pg.SignalProxy(self.scene().sigMouseMoved, rateLimit=60, slot=self.mouseMoved)

    def mouseMoved(self, evt):
        pos = evt[0]  ## using signal proxy turns original arguments into a tuple
        vb = self.plotItem.getViewBox()
        if self.sceneBoundingRect().contains(pos):
            mousePoint = vb.mapSceneToView(pos)
            index = int(mousePoint.x())
            self.label.setText(str(index))
            if index > 0 and index < len(self.data1):
                self.label.setText("<span style='font-size: 12pt'>x=%0.1f, <span style='color: red'>y1=%0.1f</span>, <span style='color: green'>y2=%0.1f</span>" % (mousePoint.x(), self.data1[index], self.data2[index]))
            self.vLine.setPos(mousePoint.x())
            self.hLine.setPos(mousePoint.y())
```

![image](https://user-images.githubusercontent.com/6836400/128445285-7bcd103f-698a-4c0c-89ea-fa6163c19534.png)
<issue_comment>username_1: I suspect you want to add the label to the embedded PlotItem instead of to the PlotWidget itself.

```python
# self.addItem(self.label)
# replace the above with
self.getPlotItem().addItem(self.label)
```
<issue_comment>username_2: LabelItem is a GraphicsWidget, so it's meant more for being added to a (Q)GraphicsLayout. [TextItem](https://pyqtgraph.readthedocs.io/en/latest/graphicsItems/textitem.html) is intended for use within a ViewBox.
<issue_comment>username_0: @username_1 Hi, it has not changed anything; it works the same as in my post.
<issue_comment>username_0: @username_1 Maybe I should use `GraphicsLayout` for it.<issue_closed>
<issue_comment>username_2: @username_0 Does replacing your LabelItem with a TextItem not work? You'd need to use something like `anchor=(1, 0)` instead of `justify='right'` and `setHtml` in place of `setText`, but otherwise it should do what you want.

To be clear, LabelItem isn't really working incorrectly here per se -- ViewBox inverts the y axis (Qt uses the y positive down convention) and stretches things according to the current view range. Adding a LabelItem as a child of a ViewBox (which is what `PlotWidget.addItem` ultimately does) means it inherits those transformations. TextItem specifically inverts those transformations so that the text appears the same regardless of ViewBox scaling.

Then again, if you don't really need the text to be within the plot itself, you can use a GraphicsLayout and a LabelItem.
<issue_comment>username_1: I figured this out the hard way while trying to do a custom graphics item with text; we probably should put a blurb in the docs about this consideration when doing a custom graphics item.
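Following username_2's suggestion, a minimal sketch of the relevant changes with TextItem in place of LabelItem (untested against the exact snippet above, but the API usage is standard pyqtgraph; the class and method names are hypothetical):

```python
import numpy as np
import pyqtgraph as pg

class Chart2(pg.PlotWidget):
    def __init__(self):
        super().__init__()
        self.data1 = np.random.normal(size=100)
        self.plot(self.data1, pen="w")
        # TextItem ignores ViewBox scaling/inversion, unlike LabelItem.
        self.label = pg.TextItem(anchor=(1, 0))  # anchor replaces justify="right"
        self.plotItem.addItem(self.label, ignoreBounds=True)

    def update_label(self, x, y1):
        # setHtml replaces setText when rich formatting is wanted.
        self.label.setHtml(
            "<span style='font-size: 12pt'>x=%0.1f, "
            "<span style='color: red'>y1=%0.1f</span></span>" % (x, y1)
        )
        self.label.setPos(x, y1)  # keep the text near the cursor
```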
{'fraction_non_alphanumeric': 0.1015358361774744, 'fraction_numerical': 0.02303754266211604, 'mean_word_length': 3.5794270833333335, 'pattern_counts': {'":': 0, '<': 15, '<?xml version=': 0, '>': 15, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17153115', 'n_tokens_mistral': 1103, 'n_tokens_neox': 1034, 'n_words': 344}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Cleanup the README username_0: The README is a holdover from a previous doc, so it needs some cleanup. I'll go through and do the basics as part of adding the badges, but this will need some general updates and markdown styling as well. We may also want to rip it out of the README and perhaps have a Wiki for this. cc @mkanoor @gmcculloug @tinaafitz <issue_comment>username_0: @mkanoor @gmcculloug @tinaafitz Bump 🐢
{'fraction_non_alphanumeric': 0.04814004376367615, 'fraction_numerical': 0.00437636761487965, 'mean_word_length': 4.654320987654321, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1831223', 'n_tokens_mistral': 150, 'n_tokens_neox': 138, 'n_words': 72}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Stun server username_0: Hi All First of all thisbproject is awesome, I have been looking through the sample code, didn't find a way to build a stun server, there are examples for a turn, can any.one point me to the right direction of build an stun from this Thank you advance. Jason<issue_closed> <issue_comment>username_1: Hey @username_0 sorry for the delay on this. This is a STUN server also! The logic for the server side is [here](https://github.com/pion/turn/blob/master/internal/server/stun.go) If you are interested it would be a great contribution to update the README to mention this :)
{'fraction_non_alphanumeric': 0.060092449922958396, 'fraction_numerical': 0.004622496147919877, 'mean_word_length': 4.2, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10632355', 'n_tokens_mistral': 189, 'n_tokens_neox': 180, 'n_words': 93}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Major update username_0: Hello, I did a bit of work on this project, main points are: - Compile with gcc 4.9 - Cmake project - Raw literal to put raw assembly - Add syscall instructions - Test using catch with coverage and html report - Travis config - Use original xml file from Maratyszcza/Opcodes to generates instructions Have a nice day ;) <issue_comment>username_1: This is awesome! Very nice work. Hopefully this will make any future contributions much easier.
{'fraction_non_alphanumeric': 0.050387596899224806, 'fraction_numerical': 0.007751937984496124, 'mean_word_length': 3.9711538461538463, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22676504', 'n_tokens_mistral': 150, 'n_tokens_neox': 141, 'n_words': 71}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: `npm run compile` should use the node_modules Truffle version for contract compilation username_0: **Problem** For compilation of contracts, the publisher's global Truffle version is used. This is a problem since the global Truffle version can be different every time the mosaic contracts npm package is published.
```
"compile": "truffle compile",
```
**Solution** Use the node_modules Truffle version for contract compilation
```
"compile": "./node_modules/.bin/truffle compile",
```
Fixes #801 <issue_comment>username_0: Adding the same comment which I added in the ticket. @pgev Thank you for the comment. I did a quick test and it seems you are right. Strangely, contracts on Etherscan are being verified with different solc versions (other than 0.5.0). Please see the report for contract verification. https://github.com/mosaicdao/mosaic-pm/blob/master/specification-discussions/20191206-etherscan-verification.md#chain-contracts-verification This suggested that when the package was published, compilation was somehow done with different solc versions. Any idea how that could happen? <issue_comment>username_0: Closing the PR as the description of the ticket has changed.
{'fraction_non_alphanumeric': 0.07257383966244725, 'fraction_numerical': 0.014345991561181435, 'mean_word_length': 5.020304568527918, 'pattern_counts': {'":': 2, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9451257', 'n_tokens_mistral': 330, 'n_tokens_neox': 304, 'n_words': 137}
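One detail worth knowing here: `npm run` prepends `./node_modules/.bin` to PATH before executing a script, so a plain `"compile": "truffle compile"` already prefers a locally installed Truffle; the global binary is only reached when no local copy exists. A small TypeScript check (the file layout is an assumption, not part of the project above) makes that failure mode visible:

```typescript
import { existsSync } from 'fs';
import { join } from 'path';

// If this prints the fallback message, `npm run compile` would silently
// use whatever `truffle` happens to be on the global PATH.
const localTruffle = join('node_modules', '.bin', 'truffle');
console.log(
  existsSync(localTruffle)
    ? `local binary found: ${localTruffle}`
    : 'no local truffle installed - npm scripts would use the global one',
);
```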
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: IE not supported username_0: start RAND Today I had to use IE11 to go to my server and got the unpleasant and very annoying warning that IE is no longer supported. Why not????? It is still a browser supported by Microsoft. Not everyone is able to use Edge or something else. Also, one warning would be enough; it gets very annoying at each mouse click. end RAND Otherwise, I am a big fan of the theme. <issue_comment>username_1: Hi, It's been discussed already. Please have a look. https://github.com/username_1/authentic-theme/issues/818#issuecomment-351792470<issue_closed> <issue_comment>username_0: OK, I had looked for an existing issue, as I could not imagine no one else had complained. Will continue there
{'fraction_non_alphanumeric': 0.06016042780748663, 'fraction_numerical': 0.02406417112299465, 'mean_word_length': 4.507352941176471, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22579345', 'n_tokens_mistral': 219, 'n_tokens_neox': 200, 'n_words': 108}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Use HttpStatusCode to decode the response from Splunk username_0: When emitting batched events, log the response from Splunk by reading its HttpStatusCode, as task.IsCompletedSuccessfully doesn’t always translate to success. (Ref: http://docs.splunk.com/Documentation/Splunk/7.0.2/RESTUM/RESTusing)<issue_closed> <issue_comment>username_1: When emitting batched events, log the response from Splunk by reading its HttpStatusCode, as task.IsCompletedSuccessfully doesn’t always translate to success. (Ref: http://docs.splunk.com/Documentation/Splunk/7.0.2/RESTUM/RESTusing)<issue_closed>
{'fraction_non_alphanumeric': 0.08562197092084006, 'fraction_numerical': 0.012924071082390954, 'mean_word_length': 6.560975609756097, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3578038', 'n_tokens_mistral': 187, 'n_tokens_neox': 174, 'n_words': 56}
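The same pitfall exists outside C#: a request that completes is not the same thing as an HTTP success. A TypeScript analogue using the global fetch API, which likewise fulfills on 4xx/5xx and only rejects on network failure (the HEC URL and the newline-delimited batch body are assumptions for illustration; this is not the sink's actual code):

```typescript
async function postBatch(hecUrl: string, events: object[]): Promise<void> {
  const res = await fetch(hecUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: events.map((e) => JSON.stringify(e)).join('\n'),
  });
  if (!res.ok) {
    // res.ok is only true for HTTP 200-299; surface everything else,
    // because the promise resolving told us nothing about success.
    throw new Error(`Splunk HEC responded with HTTP ${res.status}`);
  }
}
```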
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Continuous flickering when typing Chinese username_0: Issue Type: <b>Bug</b>

When typing Chinese in the editor, the editor flickers continuously, which is very hard on the eyes.

VS Code version: Code 1.64.2 (f80445acd5a3dadef24aa209168452a3d97cc326, 2022-02-09T22:00:58.347Z)
OS version: Darwin x64 20.6.0
Restricted Mode: No

<details> <summary>System Info</summary>

|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-6267U CPU @ 2.90GHz (4 x 2900)|
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization: enabled<br>skia_renderer: disabled_off_ok<br>video_decode: enabled<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|4, 10, 13|
|Memory (System)|16.00GB (1.43GB free)|
|Process Argv|--crash-reporter-id 155ce01a-4c7a-4ede-9f3f-458a21b1ac09|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (23)</summary>

Extension|Author (truncated)|Version
---|---|---
gitlens|eam|11.7.0
vscode-lombok|Gab|1.0.1
plantuml|jeb|2.17.2
vscode-docker|ms-|1.19.0
remote-containers|ms-|0.217.4
remote-ssh|ms-|0.74.0
remote-ssh-edit|ms-|0.74.0
remote-wsl|ms-|0.64.2
vscode-remote-extensionpack|ms-|0.21.0
vscode-thunder-client|ran|1.12.0
java|red|1.3.0
vscode-xml|red|0.19.1
vscode-yaml|red|1.4.0
open-in-browser|tec|2.0.0
html-preview-vscode|tht|0.2.5
vscodeintellicode|Vis|1.2.17
vscode-java-debug|vsc|0.38.0
vscode-java-dependency|vsc|0.19.0
vscode-java-pack|vsc|0.21.0
vscode-java-test|vsc|0.34.0
vscode-maven|vsc|0.35.0
vscode-spring-initializr|vsc|0.8.0
vim|vsc|1.22.1
</details><details>
<summary>A/B Experiments</summary>

```
vsliv368cf:30146710
vsreu685:30147344 python383:30185418 vspor879:30202332 vspor708:30202333 vspor363:30204092 vstes627:30244334 pythonvspyl392:30425749 pythontb:30283811 pythonvspyt551:30345470 pythonptprofiler:30281270 vshan820:30294714 vstes263:30335439 vscorecescf:30438341 pythondataviewer:30285071 vscod805cf:30301675 pythonvspyt200:30340761 binariesv615:30325510 bridge0708:30335490 bridge0723:30353136 vsaa593:30376534 vsc1dst:30438360 pythonvs932:30410667 wslgetstarted:30433507 vsclayoutctrc:30437038 vsrem710:30416614 vsbas813:30436447 vscscmwlcmt:30436993 helixcf:30438276 cppdebug:30437093

```
</details>

<!-- generated by issue reporter -->
<issue_comment>username_0: I'm sorry, this bug is a bug in the Vim extension. The text always flickers when I type a character.
{'fraction_non_alphanumeric': 0.1440536013400335, 'fraction_numerical': 0.2056113902847571, 'mean_word_length': 7.181506849315069, 'pattern_counts': {'":': 0, '<': 28, '<?xml version=': 0, '>': 28, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2169122', 'n_tokens_mistral': 1395, 'n_tokens_neox': 1135, 'n_words': 145}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: fix(image-loader): use tempDirectory on iOS username_0: The tempDirectory can be accessed from the WKWebView as well; no need for base64! <issue_comment>username_0: As the FileReader is not used any more, this should also fix issues #37 and #31 <issue_comment>username_1: According to the File plugin documentation, the tempDirectory is meant for temporary files that might not persist across app restarts. I haven't played around with the `tempDirectory` enough to figure out how it works. Did you experiment with it to see if the cache persists over a few sessions at least? What if we add an option to customize the iOS behavior, to make sure everybody is happy? Available options can be: - `base64`: returns a base64 image - `tempDir`: implements your solution - `disable`: disables the caching completely and returns the original URL Not sure how the new Ionic WKWebView fork updates work, but I think they serve files via the `http://` protocol instead of `file:///`. And I think the base64 option will be helpful for developing with `ionic run <platform> --livereload`, so it might be a good idea to keep the functionality around. <issue_comment>username_0: In my experience the `tempDirectory` is usually persistent; I guess it is only cleared if the device runs out of available space. But I agree, it could be a good idea to keep the base64 option, even though I cannot recommend using it on a device, at least not in production. I will implement the option next week! Cheers! <issue_comment>username_1: Thanks @username_0 <issue_comment>username_0: Alright, have a look at it! Now all the files are copied to the temporary directory, if needed. This way there should be no problems with the persistence of the directory… Also, I used readAsDataURL to avoid directly using the FileReader, which can lead to some problems (#31). I did not(!) include the 'disable' option, as I was not completely sure it is really necessary; I don't really see the benefit of it. <issue_comment>username_1: Looks good, thanks @username_0 !
{'fraction_non_alphanumeric': 0.05182341650671785, 'fraction_numerical': 0.01199616122840691, 'mean_word_length': 4.589812332439679, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '20627346', 'n_tokens_mistral': 551, 'n_tokens_neox': 522, 'n_words': 326}
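The three iOS behaviors discussed above map naturally onto a mode switch. A minimal TypeScript sketch, with hypothetical stand-ins for the plugin's internals (`copyToTempDir` and `readAsBase64` are invented names, not the plugin's actual API):

```typescript
type CacheMode = 'base64' | 'tempDir' | 'disable';

// Hypothetical helpers standing in for the plugin's internals.
const copyToTempDir = async (cached: string) =>
  `file:///tmp/${cached.split('/').pop()}`; // returns a file:// URL
const readAsBase64 = async (_cached: string) =>
  'data:image/png;base64,...'; // returns a data: URL

async function resolveCachedImage(
  originalUrl: string,
  cachedPath: string,
  mode: CacheMode,
): Promise<string> {
  switch (mode) {
    case 'tempDir':
      // WKWebView can load file:// URLs from the app's temp directory.
      return copyToTempDir(cachedPath);
    case 'base64':
      // Works everywhere (e.g. livereload), but holds the image in memory.
      return readAsBase64(cachedPath);
    case 'disable':
      // Skip caching entirely and let the webview fetch the original.
      return originalUrl;
  }
}
```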
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: flag.finalize() shouldn't return the program name username_0: **V version:** V 0.1.27 ebbf42d
**OS:** macOS 10.15

**What did you do?**
```go
module main

import os
import flag

fn main() {
    mut fp := flag.new_flag_parser(os.args)
    fp.string('aaa', `a`, 'foo', 'aaaah')
    optarg := fp.finalize() or {
        eprintln(err)
        return
    }
    eprintln('$optarg')
    assert(optarg.len == 0)
}
```

**What did you expect to see?**

According to the help message of the finalize() function, it should return the arguments passed to the program, but it returns an array of strings containing all the arguments (except the ones starting with a dash). IMHO it shouldn't return the program name in [0].

**What did you see instead?**
```
$ ./f a b -a c d
['./f', 'a', 'b', 'd']
f.v:14: FAIL: fn main(): assert optarg.len == 0
$ ./f
['./f']
f.v:14: FAIL: fn main(): assert optarg.len == 0
$
```
<issue_comment>username_1: It is true that fp.finalize() does return the same as `fp.args`, but it also first checks for errors. <issue_comment>username_1: flag.new_flag_parser(os.args) may become `flag.new_flag_parser(os.args[1..])` if you do not want the program name in the result of finalize. Another option, if you want to pass os.args, is to call `fp.skip_executable()` before the `fp.finalize()` call.<issue_closed> <issue_comment>username_0: Thanks!
{'fraction_non_alphanumeric': 0.13440860215053763, 'fraction_numerical': 0.015456989247311828, 'mean_word_length': 3.0794520547945203, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25023980', 'n_tokens_mistral': 523, 'n_tokens_neox': 481, 'n_words': 183}
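For comparison, Node CLIs follow the same convention the maintainer describes with `os.args[1..]` and `fp.skip_executable()`; this TypeScript line is an analogue, not part of V's flag module:

```typescript
// process.argv[0] is the runtime and process.argv[1] is the script,
// so CLI code slices both off before parsing user-supplied arguments.
const userArgs = process.argv.slice(2);
console.log(userArgs); // no program name in [0]
```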
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Insert into S3 may result in "out of heap" crash on executor username_0: `TrinoS3StreamingOutputStream` eagerly allocates a buffer on construction: https://github.com/trinodb/trino/blob/master/plugin/trino-hive/src/main/java/io/trino/plugin/hive/s3/TrinoS3FileSystem.java#L1499. The default buffer size is `16MB` (https://github.com/trinodb/trino/blob/master/plugin/trino-hive/src/main/java/io/trino/plugin/hive/s3/HiveS3Config.java#L70) and the streaming upload is enabled by default: https://github.com/trinodb/trino/blob/master/plugin/trino-hive/src/main/java/io/trino/plugin/hive/s3/HiveS3Config.java#L69. Currently this memory is not accounted for anywhere. When inserting into a table with a high number of buckets/partitions, this unaccounted memory can push a worker out of heap space.
{'fraction_non_alphanumeric': 0.10469314079422383, 'fraction_numerical': 0.02286401925391095, 'mean_word_length': 4.942857142857143, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '264539', 'n_tokens_mistral': 293, 'n_tokens_neox': 277, 'n_words': 67}
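A back-of-the-envelope worked example of why this matters, with the buffer size taken from the issue and the writer count an assumed fan-out:

```typescript
// Worst case for a single worker with streaming upload enabled.
const bufferBytes = 16 * 1024 * 1024; // 16 MB default buffer, per the issue
const openWriters = 500;              // assumed: one writer per open partition/bucket

const unaccountedGiB = (bufferBytes * openWriters) / 2 ** 30;
console.log(`${unaccountedGiB} GiB allocated but invisible to memory accounting`);
// -> 7.8125 GiB, easily enough to exhaust a worker's heap
```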
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add a space between images username_0: Hi. First of all, thanks @denzcoskun for sharing your awesome ImageSlider with us. I'm trying to add a space between my images using Android Studio. The slider looks fine, but I think it could look better if the images weren't collapsed together. I tried some XML properties, but they didn't work. Can you please help me with this? Thanks!
{'fraction_non_alphanumeric': 0.04116222760290557, 'fraction_numerical': 0.002421307506053269, 'mean_word_length': 4.52, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22356240', 'n_tokens_mistral': 107, 'n_tokens_neox': 110, 'n_words': 66}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Command line recursive globbing does not work on Windows username_0: <compiled output> <issue_comment>username_1: Chad I think you looked at something similar recently? <issue_comment>username_2: +1 on this; in my scenario: `google-closure-compiler index.js "./lib/**/*.js" --process_common_js_modules --module_resolution NODE` results in
```
{ Error: ENOENT: no such file or directory, open 'C:\Projects\FRSource\FRS-hide-scrollbar\lib\**\*.js'
at Object.fs.openSync (fs.js:646:18)
at Object.fs.readFileSync (fs.js:551:33)
at vvd (C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\jscomp.js:6455:186)
at ovd (C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\jscomp.js:7882:451)
at XO (C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\jscomp.js:1938:29)
at $O (C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\jscomp.js:3371:44)
at C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\jscomp.js:3573:46
at CompilerJS.run (C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\lib\node\closure-compiler-js.js:54:17)
at getFilesFromStdin.then.inputFiles (C:\Projects\FRSource\FRS-hide-scrollbar\node_modules\google-closure-compiler\cli.js:149:31)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
at Function.Module.runMain (module.js:695:11)
at startup (bootstrap_node.js:188:16)
at bootstrap_node.js:609:3 errno: -4058, code: 'ENOENT', syscall: 'open', path: 'C:\\Projects\\FRSource\\FRS-hide-scrollbar\\lib\\**\\*.js' }
```
<issue_comment>username_3: There are a couple of issues going on here. For instance #3087 is also applicable. @username_2 That glob pattern isn't recognized by Closure Compiler. `"./lib/**.js"` would be the correct version. <issue_comment>username_2: @username_3 but that means closure-compiler doesn't support minimatch-style globs (as stated in the README). Do you think it should be noted in the README, or should I file a bug about that? <issue_comment>username_4: I have a similar problem - I'm trying to exclude files and it's not working. I have `--js=!**template.js` on my command line, but it still tries to process template.js. I assumed the problem was caused by escaping, so I downloaded the source code and modified it to output the args to stdout, but the arg makes it all the way into main without getting mangled. <issue_comment>username_4: I found a few issues: - the exclude filename is a relative path and the filename from the include is an absolute path, so they aren't equal and the file gets included when it shouldn't be - the excludes and includes are processed in the same loop, so the order of parameters is significant; I don't know if this is intended - it searches through all subdirectories of the current directory looking for files matching the exclude glob and adding them to the exclude list, which can be slow if the current directory contains a lot of files and folders. If possible, it would be quicker to compare the included files to the excluded globs <issue_comment>username_5: Created Google internal issue http://b/122664915
{'fraction_non_alphanumeric': 0.09785563273935367, 'fraction_numerical': 0.029296285110238598, 'mean_word_length': 4.333333333333333, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29510079', 'n_tokens_mistral': 1087, 'n_tokens_neox': 1007, 'n_words': 339}
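The first bug in the thread's list (relative exclude paths never matching absolute include paths) has a straightforward fix: normalize both sides before comparing. A sketch using only Node's path module; `isExcluded` is an illustrative helper, not the compiler's actual code:

```typescript
import * as path from 'path';

// Include paths arrive absolute while the exclude matches were collected
// as relative paths, so plain string equality never fires. Resolving both
// sides against the same working directory fixes the mismatch.
function isExcluded(includedFile: string, excludedFiles: string[]): boolean {
  const target = path.resolve(includedFile);
  return excludedFiles.some((excluded) => path.resolve(excluded) === target);
}

// Both spellings now name the same file once anchored at the cwd:
console.log(isExcluded(path.resolve('lib/template.js'), ['lib/template.js'])); // true
```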