Dataset columns:
- source: string (1 class)
- text: string (152 to 659k chars)
- filtering_features: string (402 to 437 chars)
- source_other: string (440 to 819k chars)
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Io.rancher.container.ip Docker label not set on containers
username_0: **Rancher versions:**
rancher/server: 1.5.9
rancher/agent: 1.2.2

**Infrastructure Stack versions:**
healthcheck: 0.2.0
ipsec: holder
network-services: 0.6.6 & 0.91 (Metadata)
scheduler: 0.7.5

**Docker version: (`docker version`, `docker info` preferred)**
Client:
 Version: 17.03.1-ce
 API version: 1.27
 Go version: go1.7.5
 Git commit: c6d412e
 Built: Mon Mar 27 17:14:09 2017
 OS/Arch: linux/amd64

Server:
 Version: 17.03.1-ce
 API version: 1.27 (minimum version 1.12)
 Go version: go1.7.5
 Git commit: c6d412e
 Built: Mon Mar 27 17:14:09 2017
 OS/Arch: linux/amd64
 Experimental: false

**Operating system and kernel: (`cat /etc/os-release`, `uname -r` preferred)**
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
Kernel: 4.4.0-1017-aws

**Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)**
AWS

**Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)**
HA rancher, external DB

**Environment Template: (Cattle/Kubernetes/Swarm/Mesos)**
Cattle

**Steps to Reproduce:**
Deploy a stack, go to its container, and see that Io.rancher.container.ip is sometimes missing.

**Results:**
Here's a screenshot of one of our ElasticSearch containers:
![Uploading Screenshot from 2017-05-30 16-35-11.png…]()
You can see that the Io.rancher.container.ip label is missing. This is a major problem for us, as we use DataDog for our monitoring and it relies on this label being present as part of its service detection. As far as I can see in the documentation, this label should be set for all containers and contain the currently assigned managed IP address.
<issue_comment>username_1: @username_0 Is there more context to when this is happening? "Sometimes missing": is that one stack where sometimes it does and sometimes it does not? How often does this occur?
<issue_comment>username_0: Looking through the containers it seems to be isolated to just that stack... All containers in that stack seem to be missing it, however other stacks in the same environment seem to be fine, including containers running on the same boxes as the "infrastructure" stack that is missing it.
<issue_comment>username_2: I have a similar issue. Whenever I use the "Retain IP" option, the io.rancher.container.ip label is not set.
<issue_comment>username_3: @username_2 is correct: when setting "Retain IP", the `io.rancher.container.ip` label isn't set. This is a bug, right, or is there a reasoning behind this?
<issue_comment>username_4: This turned out to be a rather huge problem for us as well: as we use this label to identify containers in Prometheus, we effectively can't track instances that use `retain_ip: true`. It seems this is due to the fact that the label is added on instance.create in the rancher-agent (this is where it should be added: https://github.com/rancher/agent/blob/v0.12.0/core/compute/compute_unix.go#L159) and my guess is that a **retained** ip is not known *yet*. The rancher-agent does not seem to differentiate between retained/non-retained containers, so I guess the *bug* is actually within the service that sends out the `compute.instance.activate` event (https://github.com/rancher/agent/blob/v0.12.0/handlers/common.go#L39), as it should populate the `instanceHostMap.instance.nics[0].ipAddresses`.
(https://github.com/rancher/agent/blob/v0.12.0/handlers/test_events/instance_activate_basic#L74)
<issue_comment>username_0: After more investigation this does indeed seem to happen when retain IP is set. Is there any update on this issue, as it continues to cause problems for services that depend on that label for container IP discovery?
<issue_comment>username_5: @username_1 any updates on this? Is there a workaround to set that label manually? It breaks the DataDog integration.
<issue_comment>username_6: We also had this issue, and we use this workaround for services with scale 1. If that does not work, we clone the service, like aerospike1, aerospike2, aerospike3. After the service is up, we update the service to enable "Retain IP" and we manually add the label io.rancher.container.ip=xx.xx.xx.xx/xx with the IP the container has received. Not beautiful, but it works.
<issue_comment>username_7: With the release of Rancher 2.0, development on v1.6 is limited to critical bug fixes and security patches.<issue_closed>
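To make username_4's diagnosis easier to follow, here is a minimal Go sketch of the label-setting step, assuming hypothetical payload types (the real rancher-agent code at the links above is more involved): if the `compute.instance.activate` event arrives with an empty `ipAddresses` list, as appears to happen for retained IPs, the label is silently skipped.
```go
package compute

import "fmt"

// IPAddress and Nic are illustrative stand-ins for the activate-event
// payload types, not the actual rancher-agent structs.
type IPAddress struct {
	Address string
	Prefix  int
}

type Nic struct {
	IPAddresses []IPAddress
}

// setIPLabel mirrors the behavior described above: the label is written
// only when the event actually carries an IP, so a retained IP that is
// not known yet leaves the label unset on the container.
func setIPLabel(labels map[string]string, nics []Nic) {
	if len(nics) == 0 || len(nics[0].IPAddresses) == 0 {
		return // retained IP not resolved yet: label silently omitted
	}
	ip := nics[0].IPAddresses[0]
	labels["io.rancher.container.ip"] = fmt.Sprintf("%s/%d", ip.Address, ip.Prefix)
}
```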
{'fraction_non_alphanumeric': 0.08633856597369538, 'fraction_numerical': 0.03309291472210437, 'mean_word_length': 4.256410256410256, 'pattern_counts': {'":': 0, '<': 12, '<?xml version=': 0, '>': 12, 'https://': 3, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2548998', 'n_tokens_mistral': 1566, 'n_tokens_neox': 1405, 'n_words': 587}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: isconcrete not defined username_0: On both master and latest tagged release: ``` julia> using ColorTypes INFO: Precompiling module ColorTypes. ERROR: LoadError: UndefVarError: isconcrete not defined Stacktrace: [1] include_from_node1(::String) at ./loading.jl:569 [2] include(::String) at ./sysimg.jl:14 [3] anonymous at ./<missing>:2 while loading /local/home/fredrikb/.julia/v0.6/ColorTypes/src/ColorTypes.jl, in expression starting on line 9 ERROR: Failed to precompile ColorTypes to /local/home/fredrikb/.julia/lib/v0.6/ColorTypes.ji. Stacktrace: [1] compilecache(::String) at ./loading.jl:703 [2] _require(::Symbol) at ./loading.jl:490 [3] require(::Symbol) at ./loading.jl:398 julia> versioninfo() Julia Version 0.6.0 Commit 9036443 (2017-06-19 13:05 UTC) Platform Info: OS: Linux (x86_64-pc-linux-gnu) CPU: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz WORD_SIZE: 64 BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell) LAPACK: libopenblas64_ LIBM: libopenlibm LLVM: libLLVM-3.9.1 (ORCJIT, haswell) ``` <issue_comment>username_1: I can't reproduce this on 0.6. what does the following say: ```julia Pkg.status("ColorTypes") Pkg.status("Compat") ``` <issue_comment>username_0: I have tried checking out master on both Compat and ColorTypes but the problem persists ```julia julia> Pkg.status("ColorTypes") - ColorTypes 0.6.4 master julia> Pkg.status("Compat") - Compat 0.32.0 master ``` <issue_comment>username_1: `isconcrete` seems to have been introduced to ColorTypes with https://github.com/JuliaGraphics/ColorTypes.jl/commit/ee160a9a46a702634a7e6dc2f99ed1e713cdd80c . Maybe @username_3 has an idea what your problem could be? I don't experience any issues. <issue_comment>username_2: Update your packages? And make sure you're running the latest Compat. <issue_comment>username_3: Most likely a local setup issue. The backtrace suggests that you don't actually have an up to date Compat.jl. Possible cause includes having something in `LOAD_PATH`, having a dirty `Compat`, having `Compat` in `userimg.jl` etc. <issue_comment>username_0: You're right @username_3 , a package in userimage was using Compat. Thanks, I would have never figured that out myself!<issue_closed> <issue_comment>username_4: @username_0 Could you share your solution here? (I know this is not discource. But it is the first and the only exact entry when googled.) <issue_comment>username_0: I recompiled the system image, in my case the commands were ```julia include(joinpath(JULIA_HOME, Base.DATAROOTDIR, "julia", "build_sysimg.jl")) build_sysimg(default_sysimg_path(), "native", "userimage.jl", force=true) ``` <issue_comment>username_3: @username_0 Do **NOT** do it unless you are also compiling your own sysimg with a `userimg.jl`. <issue_comment>username_0: What is the difference between running the command `build_sysimg` and compiling my own sysimg? <issue_comment>username_3: None. I just mean that the google search that led @username_4 here is unlikely userimg related in which case building the sysimg is most likely not the solution. <issue_comment>username_2: @username_4, the first thing to check is `Pkg.status("Compat")`. If it says "dirty" or anything less than the version in the [ColorTypes REQUIRE file](https://github.com/JuliaGraphics/ColorTypes.jl/blob/master/REQUIRE), then you need to update your packages and/or clear out any modifications you've made locally to Compat and then update. <issue_comment>username_4: The Compat status looks good. 
I also tried deleting `~/.julia/v0.6/Compat` and `~/.julia/.cache/Compat`, and then reinstalled Compat.
```julia
julia> Pkg.status("Compat")
 - Compat 0.32.0

julia> using Plots
INFO: Precompiling module PlotUtils.
ERROR: LoadError: UndefVarError: isconcrete not defined
Stacktrace:
 [1] include_from_node1(::String) at ./loading.jl:569
 [2] include(::String) at ./sysimg.jl:14
 [3] anonymous at ./<missing>:2
while loading /home/lizz/.julia/v0.6/ColorTypes/src/ColorTypes.jl, in expression starting on line 9
ERROR: LoadError: Failed to precompile ColorTypes to /home/lizz/.julia/lib/v0.6/ColorTypes.ji.
Stacktrace:
 [1] compilecache(::String) at ./loading.jl:703
 [2] _require(::Symbol) at ./loading.jl:456
 [3] require(::Symbol) at ./loading.jl:398
 [4] include_from_node1(::String) at ./loading.jl:569
 [5] include(::String) at ./sysimg.jl:14
 [6] anonymous at ./<missing>:2
while loading /home/lizz/.julia/v0.6/Colors/src/Colors.jl, in expression starting on line 5
ERROR: LoadError: Failed to precompile Colors to /home/lizz/.julia/lib/v0.6/Colors.ji.
Stacktrace:
 [1] compilecache(::String) at ./loading.jl:703
 [2] _require(::Symbol) at ./loading.jl:456
 [3] require(::Symbol) at ./loading.jl:398
 [4] include_from_node1(::String) at ./loading.jl:569
 [5] include(::String) at ./sysimg.jl:14
 [6] anonymous at ./<missing>:2
while loading /home/lizz/.julia/v0.6/PlotUtils/src/PlotUtils.jl, in expression starting on line 7
ERROR: LoadError: Failed to precompile PlotUtils to /home/lizz/.julia/lib/v0.6/PlotUtils.ji.
Stacktrace:
 [1] compilecache(::String) at ./loading.jl:703
 [2] _require(::Symbol) at ./loading.jl:490
 [3] require(::Symbol) at ./loading.jl:398
 [4] include_from_node1(::String) at ./loading.jl:569
 [5] eval(::Module, ::Any) at ./boot.jl:235
 [6] _require(::Symbol) at ./loading.jl:483
 [7] require(::Symbol) at ./loading.jl:398
while loading /home/lizz/.julia/v0.6/Plots/src/Plots.jl, in expression starting on line 13

julia> versioninfo()
Julia Version 0.6.0
Commit 9036443 (2017-06-19 13:05 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Core(TM) i7-4930K CPU @ 3.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.9.1 (ORCJIT, ivybridge)
```
<issue_comment>username_2: What happens if you start a fresh Julia session and just type `Compat`? Here's what it does for me:
```julia
julia> Compat
ERROR: UndefVarError: Compat not defined
```
But if you get this instead:
```julia
julia> Compat
Compat
```
and you don't have anything in your `.juliarc.jl` file that would force the loading of Compat (you could start julia with `julia --startup-file=no` to be sure), then that means you've compiled a `userimg.jl` that loads Compat. So then you should indeed follow @username_0's solution.
<issue_comment>username_4: Thanks for the diagnostic instructions. You are right: I compiled `userimg.jl` for loading `OhMyREPL` a while ago and had completely forgotten about it. Everything is okay on another machine, so thanks!
{'fraction_non_alphanumeric': 0.11561119293078057, 'fraction_numerical': 0.0406480117820324, 'mean_word_length': 4.284824902723735, 'pattern_counts': {'":': 0, '<': 22, '<?xml version=': 0, '>': 31, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2760677', 'n_tokens_mistral': 2507, 'n_tokens_neox': 2297, 'n_words': 732}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Go Mod Changes for Flag Declaration
username_0: This is a cascading change for flag declaration from Podman. It adds a new function called StringSliceVarMult(), which places the memory location of the flag in multiple variables, allowing pod-create options and container-create options to be automatically aligned. spf13/pflag is infrequently checked and does not accept PRs on a regular basis, so using our own fork makes sense for customization.
<issue_comment>username_1: I agree, this has got to go upstream or be fixed some other way.
<issue_comment>username_0: OK, thanks for the feedback @username_1 @Luap99. I wasn't sure of the best way to proceed and thought this might work. I will try to implement something that achieves the same result on the Podman side of things.
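For illustration, a helper like the one described could be built on pflag's `Value` interface; this is a sketch only, with hypothetical names and simplified comma parsing, not the actual signature in the Podman fork:
```go
package flagutil

import (
	"strings"

	"github.com/spf13/pflag"
)

// multiSliceValue appends every parsed item to several target slices at
// once, so a single flag definition feeds multiple option structs.
type multiSliceValue struct {
	targets []*[]string
}

func (m *multiSliceValue) String() string {
	if len(m.targets) == 0 {
		return ""
	}
	return strings.Join(*m.targets[0], ",")
}

func (m *multiSliceValue) Set(s string) error {
	// simplified: real pflag does CSV-style parsing here
	for _, t := range m.targets {
		*t = append(*t, strings.Split(s, ",")...)
	}
	return nil
}

func (m *multiSliceValue) Type() string { return "stringSlice" }

// StringSliceVarMult registers one flag whose parsed values land in
// every target variable.
func StringSliceVarMult(fs *pflag.FlagSet, name, usage string, targets ...*[]string) {
	fs.Var(&multiSliceValue{targets: targets}, name, usage)
}
```
Registering one flag that writes into several option structs is what keeps the pod-create and container-create flags aligned without declaring the flag twice.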
{'fraction_non_alphanumeric': 0.028255528255528257, 'fraction_numerical': 0.009828009828009828, 'mean_word_length': 4.992647058823529, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26810193', 'n_tokens_mistral': 204, 'n_tokens_neox': 193, 'n_words': 123}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Make webhook sample notifications configurable username_0: Hey guys, while working on the Braintree integration for Codeship.io, we needed to test Braintree webhooks more thoroughly than your current testing facilities enable (see issues #64 and #65). I came up with a factory approach as shown here: https://gist.github.com/username_0/819ca167580f734b9e26. Is there any interest from your side in integrating this? If yes, I could add tests and turn it into a proper pull request ... Cheers, - Clemens <issue_comment>username_1: Closing. @username_0 feel free to reopen if you have a PR you want to submit.<issue_closed>
{'fraction_non_alphanumeric': 0.057057057057057055, 'fraction_numerical': 0.03453453453453453, 'mean_word_length': 4.75, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14264417', 'n_tokens_mistral': 197, 'n_tokens_neox': 182, 'n_words': 89}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: stats with all zeros for clients > 1
username_0: Locust 0.8.1
Getting stats with all zeros when # clients > 1. The same locustfile.py (very basic) produces good stats with a single client. Am I missing something... obvious?<issue_closed>
<issue_comment>username_1: yes.. the issue template you skipped filling out was pretty obvious. Instead, you've opted to provide nearly zero information, so there is no chance of diagnosing.
<issue_comment>username_0:
```
[2018-07-05 14:37:05,680] X1/INFO/locust.main: Starting Locust 0.8.1
[2018-07-05 14:37:05,680] X1/INFO/locust.runners: Hatching and swarming 1 clients at the rate 1 clients/s...
[2018-07-05 14:37:06,683] X1/INFO/locust.runners: All locusts hatched: WebsiteUser: 1
[2018-07-05 14:37:06,683] X1/INFO/locust.runners: Resetting stats
[2018-07-05 14:37:07,832] X1/INFO/locust.runners: All locusts dead
[2018-07-05 14:37:07,835] X1/INFO/locust.main: Shutting down (exit code 0), bye.

 Name       # reqs    # fails     Avg    Min    Max  |  Median   req/s
----------------------------------------------------------------------
 GET /           2   0(0.00%)      59     59     59  |      59    0.00
----------------------------------------------------------------------
 Total           2   0(0.00%)                                     0.00

Percentage of the requests completed within given times
 Name       # reqs    50%    66%    75%    80%    90%    95%    98%    99%   100%
----------------------------------------------------------------------------------
 GET /           2     59     59     59     59     59     59     59     59     59
----------------------------------------------------------------------------------
```
<issue_comment>username_1: you are trying to run a test that sends 2 requests?
<issue_comment>username_0: It's the smallest # that reproduces the behavior. Any # shows the same result.
<issue_comment>username_2: @username_0 you can probably use this https://github.com/locustio/locust/issues/583 to solve this; locust resets the requests once the clients are finished hatching, but it's possible that your entire test runs too fast.
<issue_comment>username_0: @username_2, Worked like a charm, Thanks! 👍
<issue_comment>username_0: My bad. Looks like it's hit or miss. Sometimes it returns good stats and sometimes it is all zeros with the same exact use case: --clients=300 --num-request=500. But it works right with --clients=1 every time, though. Any idea what else I should try?
<issue_comment>username_3: Facing the same issue in the current version of locust - `2.8.3`
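To make the "test runs too fast" explanation concrete, here is a minimal locustfile for the 0.8.x API discussed in this thread (illustrative, not the reporter's actual file): with a wait between tasks and a longer run, some requests complete after the post-hatch stats reset, so the counters are non-zero.
```python
# locustfile.py for the Locust 0.8.x API
from locust import HttpLocust, TaskSet, task


class UserTasks(TaskSet):
    @task
    def index(self):
        self.client.get("/")


class WebsiteUser(HttpLocust):
    task_set = UserTasks
    min_wait = 1000  # ms between tasks, so the run outlives hatching
    max_wait = 2000
```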
{'fraction_non_alphanumeric': 0.24548969072164947, 'fraction_numerical': 0.0666881443298969, 'mean_word_length': 1.1850809289232935, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 13, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 14, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30750335', 'n_tokens_mistral': 869, 'n_tokens_neox': 736, 'n_words': 273}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Plot with randomColor username_0: <!-- This issue tracker is for bugs and feature requests in the Shiny package. If you're having trouble with Shiny Server or a related package, please file an issue in the appropriate repository. If you're having trouble with shinyapps.io, and you have a paid account (Starter, Basic, Standard, or Pro), please file a support ticket via https://support.rstudio.com. If you have a Free account, please post to the RStudio Community with the shinyappsio tag: https://community.rstudio.com/tags/shinyappsio. Finally, if you are an RStudio customer and are having trouble with one of our Pro products, get in touch with our support team at <EMAIL>. Before you file an issue, please upgrade to the latest version of Shiny from CRAN and confirm that the problem persists. # First, restart R. # To install latest shiny from CRAN: install.packages("shiny") See our guide to writing good bug reports for further guidance: https://github.com/rstudio/shiny/wiki/Writing-Good-Bug-Reports. The better your report is, the likelier we are to be able to reproduce and ultimately solve it. --> ### System details Browser Version: Firefox 78.5.0esr (64 bits) Output of `sessionInfo()`: ``` R version 4.0.3 (2020-10-10) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 10 x64 (build 19042) Matrix products: default locale: [1] LC_COLLATE=French_France.1252 LC_CTYPE=French_France.1252 LC_MONETARY=French_France.1252 LC_NUMERIC=C [5] LC_TIME=French_France.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] randomcoloR_1.1.0.1 shiny_1.6.0 RPostgres_1.3.1 loaded via a namespace (and not attached): [1] Rcpp_1.0.6 jquerylib_0.1.3 compiler_4.0.3 bslib_0.2.4 later_1.1.0.1 tools_4.0.3 digest_0.6.27 [8] bit_4.0.4 jsonlite_1.7.2 Rtsne_0.15 lifecycle_0.2.0 pkgconfig_2.0.3 rlang_0.4.10 rstudioapi_0.13 [15] DBI_1.1.1 curl_4.3 xfun_0.20 fastmap_1.1.0 withr_2.4.0 stringr_1.4.0 cluster_2.1.0 [22] sass_0.3.1 vctrs_0.3.6 hms_1.0.0 bit64_4.0.5 R6_2.5.0 blob_1.2.1 magrittr_2.0.1 [29] scales_1.1.1 promises_1.1.1 ellipsis_0.3.1 htmltools_0.5.1.1 mime_0.9 xtable_1.8-4 colorspace_2.0-0 [36] httpuv_1.5.5 V8_3.4.0 tinytex_0.29 stringi_1.5.3 munsell_0.5.0 cachem_1.0.1 crayon_1.3.4 ``` ### Example application *or* steps to reproduce the problem <!-- If you're able to create one, a reproducible example is extremely helpful to us. For instructions on how to create one, please see: https://github.com/rstudio/shiny/wiki/Creating-a-Reproducible-Example --> ```R library(shiny) library(randomcoloR) ui <- fluidPage( plotOutput( "plot1" )) server <- function(input, output) { output$plot1 <- renderPlot( plot( 1, 1, col = randomColor()) ) } shinyApp(ui = ui, server = server) ``` ### Describe the problem in detail The above code produces the error ``` Warning: Error in context_eval: <string conversion failed> 193: <Anonymous> ``` while this code in R produces a plot with a random color point: ``` library(randomcoloR) plot( 1, 1, col = randomColor()) ``` and the app with a defined color like `"#d8d345"` works fine in Shiny. 
<issue_comment>username_1: ─ Session info ───────────────────────────────────────────────────────────────── setting value version R version 4.0.3 (2020-10-10) os macOS Catalina 10.15.5 system x86_64, darwin17.0 ui RStudio language (EN) collate en_US.UTF-8 ctype en_US.UTF-8 tz America/Chicago date 2021-02-11 ─ Packages ───────────────────────────────────────────────────────────────────── package * version date lib source assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.0.0) bslib 0.2.4.9001 2021-02-09 [1] Github (rstudio/bslib@f613324) cachem 1.0.3 2021-02-04 [1] standard (@1.0.3) cli 2.3.0 2021-01-31 [1] CRAN (R 4.0.2) cluster 2.1.0 2019-06-19 [2] CRAN (R 4.0.3) colorspace 2.0-0 2020-11-11 [1] CRAN (R 4.0.2) curl 4.3 2019-12-02 [1] CRAN (R 4.0.0) digest 0.6.27 2020-10-24 [1] CRAN (R 4.0.2) ellipsis 0.3.1 2020-05-15 [1] CRAN (R 4.0.0) fastmap 1.1.0 2021-01-25 [1] CRAN (R 4.0.3) glue 1.4.2 2020-08-27 [1] CRAN (R 4.0.2) htmltools 0.5.1.1 2021-01-22 [1] CRAN (R 4.0.2) httpuv 1.5.5 2021-01-13 [1] CRAN (R 4.0.2) jquerylib 0.1.3 2020-12-17 [1] standard (@0.1.3) jsonlite 1.7.2 2020-12-09 [1] CRAN (R 4.0.2) later 1.1.0.9000 2021-01-07 [1] Github (r-lib/later@eb2c8ad) lifecycle 0.2.0 2020-03-06 [1] CRAN (R 4.0.0) magrittr 2.0.1 2020-11-17 [1] CRAN (R 4.0.2) mime 0.9 2020-02-04 [1] CRAN (R 4.0.0) munsell 0.5.0 2018-06-12 [1] CRAN (R 4.0.2) promises 1.1.1.9001 2021-01-14 [1] Github (rstudio/promises@55ca04f) R6 2.5.0 2020-10-28 [1] CRAN (R 4.0.2) randomcoloR * 1.1.0.1 2019-11-24 [1] CRAN (R 4.0.2) Rcpp 1.0.6 2021-01-15 [1] CRAN (R 4.0.2) rlang 0.4.10 2020-12-30 [1] CRAN (R 4.0.2) Rtsne 0.15 2018-11-10 [1] CRAN (R 4.0.2) sass 0.3.1 2021-01-24 [1] CRAN (R 4.0.2) scales 1.1.1 2020-05-11 [1] CRAN (R 4.0.2) sessioninfo 1.1.1 2018-11-05 [1] CRAN (R 4.0.0) shiny * 1.6.0.9000 2021-02-09 [1] local stringi 1.5.3 2020-09-09 [1] CRAN (R 4.0.2) stringr 1.4.0 2019-02-10 [1] CRAN (R 4.0.0) V8 3.4.0 2020-11-04 [1] CRAN (R 4.0.2) withr 2.4.1 2021-01-26 [1] CRAN (R 4.0.2) xtable 1.8-4 2019-04-21 [1] CRAN (R 4.0.0) yaml 2.2.1 2020-02-01 [1] CRAN (R 4.0.0) [1] /Users/winston/R/4.0 [2] /Library/Frameworks/R.framework/Versions/4.0/Resources/library ``` <issue_comment>username_0: OK! I'm not the only one, though : [I reproduced it from a question on SO](https://stackoverflow.com/questions/66132096/using-randomcolor-in-r-4-0-3-shiny-1-6-0) <issue_comment>username_1: I wonder if `randomColor()` is returning something strange. If you add `print(randomColor())` to the `renderPlot()`, it might help figure out what the problem is. Or you could put a `browser()` in there and run `randomColor()` at the R console when it goes to the debugging prompt. <issue_comment>username_0: ``` Called from: renderPlot(...) Browse[1]> randomColor() Error in context_eval(join(src), private$context, serialize) : <string conversion failed> ``` <issue_comment>username_1: It looks like that error is coming from V8 when it tries to evaluate the string representing JavaScript code: https://github.com/username_2/V8/blob/e2b50a72bcd3b7847ae07c7465b6d8472bf49a39/src/bindings.cpp#L26 https://github.com/username_2/V8/blob/81701ca2a833141caa7b22299c67410d6a1b4264/R/V8.R#L139 At the browser prompt, what happens if you run this? ```R randomcoloR:::ct$eval("1 + 1") ``` It should use randomcoloR's v8 object to evaluate javascript code, and print out `[1] "2"`. Seems very strange that the error would happen inside of a Shiny app but not outside. @username_2 any ideas? 
<issue_comment>username_0: ``` randomcoloR:::ct$eval("1 + 1") ``` produces same error <issue_comment>username_2: @username_0 works for me on Windows. So this fails for you? ![Screen Shot 2021-02-11 at 10 48 05 PM](https://user-images.githubusercontent.com/216319/107702935-41968600-6cbb-11eb-95ea-c23a6558c385.png) <issue_comment>username_0: I haven't installed it myself, but my customer installed it for me 2 months ago, and I installed R less than a month ago: Γ‰dition Windows 10 Professionnel Version 20H2 InstallΓ© le β€Ž16/β€Ž12/β€Ž2020 Version du systΓ¨me d’exploitation 19042.685 ExpΓ©rience Windows Feature Experience Pack 120.2212.551.0 <issue_comment>username_0: But @username_2 it fails only within the shiny app <issue_comment>username_1: If you installed the package two months ago and the installed a newer version of R after that, it could cause problems. I suggest reinstalling your packages. More in-depth information here, if you’re interested: https://shiny.rstudio.com/articles/upgrade-R.html <issue_comment>username_0: I installed windows 2 month ago, R and shiny 1 month ago and randomcoloR yesterday. <issue_comment>username_2: So you are saying that `randomcoloR:::ct$eval("1 + 1")` works in RGui or RStudio but, but not within shiny? <issue_comment>username_0: Exactly @username_2 ! in RStudio ![image](https://user-images.githubusercontent.com/13158585/107708069-fce0ff80-6c66-11eb-8df2-69f47344c167.png) in renderPlot ![image](https://user-images.githubusercontent.com/13158585/107708012-e5097b80-6c66-11eb-9e93-416ac63f25e0.png) <issue_comment>username_2: When did this start happening? Or did it never work? <issue_comment>username_0: It seems it works fine on mac, and on older R: [I was trying to help on SO](https://stackoverflow.com/questions/66132096/using-randomcolor-in-r-4-0-3-shiny-1-6-0) <issue_comment>username_1: At the Browse prompt in the Shiny app, what happens if you run: ```R ct <- V8::v8() ct$eval("1+1") ``` It should print out `"2"` again. The purpose of testing this is that it uses V8 directly and avoids randomcoloR. And also, the output of this would be useful: ```R Encoding("1+1") ``` On my Windows computer, it prints out `"unknown"`. <issue_comment>username_2: Yes I can reproduce it now on Windows. It only happens if v8 is called inside `renderPlot()`. I have no idea why that would be the case. 
```r library(shiny) ctx <- V8::v8() # Running JS in renderText works ui <- fluidPage( textOutput("mytext")) server <- function(input, output) { output$mytext <- renderText(ctx$eval('"Hello World"')) } shinyApp(ui = ui, server = server) # Running JS in renderPlot errors ui <- fluidPage( plotOutput("myplot")) server <- function(input, output) { output$myplot <- renderPlot(plot(1, main = ctx$eval('"Hello World"'))) } shinyApp(ui = ui, server = server) ``` <issue_comment>username_1: - Session info -------------------------------------------------------------------- setting value version R version 3.6.3 (2020-02-29) os Windows 10 x64 system x86_64, mingw32 ui RStudio language (EN) collate English_United States.1252 ctype English_United States.1252 tz America/Chicago date 2021-02-11 - Packages ------------------------------------------------------------------------ package * version date lib source assertthat 0.2.1 2019-03-21 [1] CRAN (R 3.6.1) bslib 0.2.4 2021-01-25 [1] CRAN (R 3.6.3) cachem 1.0.3 2021-02-04 [1] CRAN (R 3.6.3) Cairo 1.5-12.2 2020-07-07 [1] CRAN (R 3.6.3) cli 2.3.0 2021-01-31 [1] CRAN (R 3.6.3) curl 4.3 2019-12-02 [1] CRAN (R 3.6.1) digest 0.6.27 2020-10-24 [1] CRAN (R 3.6.3) ellipsis 0.3.1 2020-05-15 [1] CRAN (R 3.6.3) fastmap 1.1.0 2021-01-25 [1] CRAN (R 3.6.3) glue 1.4.2 2020-08-27 [1] CRAN (R 3.6.3) htmltools 0.5.1.1 2021-01-22 [1] CRAN (R 3.6.3) httpuv 1.5.5 2021-01-13 [1] standard (@1.5.5) jquerylib 0.1.3 2020-12-17 [1] CRAN (R 3.6.3) jsonlite 1.7.2 2020-12-09 [1] CRAN (R 3.6.3) later 1.1.0.1 2020-06-05 [1] CRAN (R 3.6.3) lifecycle 0.2.0 2020-03-06 [1] CRAN (R 3.6.1) magrittr 2.0.1 2020-11-17 [1] CRAN (R 3.6.3) mime 0.9 2020-02-04 [1] CRAN (R 3.6.2) packrat 0.5.0 2018-11-14 [1] CRAN (R 3.6.3) promises 1.1.1 2020-06-09 [1] CRAN (R 3.6.3) R6 2.5.0 2020-10-28 [1] CRAN (R 3.6.3) Rcpp 1.0.6 2021-01-15 [1] CRAN (R 3.6.3) rlang 0.4.10 2020-12-30 [1] CRAN (R 3.6.3) rsconnect 0.8.16-9002 2021-01-13 [1] Github (rstudio/rsconnect@200f599) sass 0.3.1 2021-01-24 [1] CRAN (R 3.6.3) sessioninfo 1.1.1 2018-11-05 [1] CRAN (R 3.6.1) shiny * 1.6.0 2021-01-25 [1] CRAN (R 3.6.3) V8 3.4.0 2020-11-04 [1] CRAN (R 3.6.3) withr 2.4.1 2021-01-26 [1] CRAN (R 3.6.3) xtable 1.8-4 2019-04-21 [1] CRAN (R 3.6.1) ``` @username_2, does it happen with `renderPrint()`? I ask because it and `renderPlot()` use `capture.output()`, but `renderText()` does not. <issue_comment>username_2: Yes it only happens on R 4.0+ on Windows, and only inside `renderPlot()`. Not in `renderPrint()`. Perhaps some interaction with the changes in the graphics device in R 4.0 ? Does renderPlot set a locale or anything that could affect the c++ runtime? <issue_comment>username_1: Most of the plotting code happens in the `plotPNG()` function in https://github.com/rstudio/shiny/blob/master/R/imageutils.R I don't think there's any locale stuff that we do with plots. 
I believe it basically does something like: ```R png('file.png') plot.new() # Plotting code here dev.off() ``` <issue_comment>username_0: I tried `ct <- V8::v8()` at browser prompt in shiny renderPlot => It crashes my session <issue_comment>username_1: Listening on http://0.0.0.0:8000 Exception thrown during bootstrapping *** caught segfault *** address (nil), cause 'memory not mapped' Traceback: 1: make_context(private$console) 2: reset() 3: (function() { eval <- function(src, serialize = FALSE) { evaluate_js(src, serialize = serialize) } validate <- function(src) { context_validate(join(src), private$context) } call <- function(fun, ..., auto_unbox = TRUE) { stopifnot(is.character(fun)) stopifnot(this$validate(c("fun=", fun))) jsargs <- list(...) if (!is.null(names(jsargs))) { stop("Named arguments are not supported in JavaScript.") } jsargs <- vapply(jsargs, function(x) { if (is.raw(x)) { raw_to_js(x) } else if (is.atomic(x) && inherits(x, "JS_EVAL")) { as.character(x) } else { toJSON(x, auto_unbox = auto_unbox) } }, character(1)) jsargs <- paste(jsargs, collapse = ",") src <- paste0("(", fun, ")(", jsargs, ");") get_json_output(evaluate_js(src, serialize = TRUE)) } source <- function(file) { if (is.character(file) && length(file) == 1 && grepl("^https?://", file)) { file <- curl(file, open = "r") on.exit(close(file)) } evaluate_js(readLines(file, encoding = "UTF-8", warn = FALSE)) } get <- function(name, ...) { stopifnot(is.character(name)) get_json_output(evaluate_js(name, serialize = TRUE), ...) } assign <- function(name, value, auto_unbox = TRUE, ...) { stopifnot(is.character(name)) obj <- if (is.raw(value)) { write_array_buffer(name, value, private$context) } else if (inherits(value, "JS_EVAL")) { invisible(evaluate_js(paste("var", name, "=", value))) } else { invisible(evaluate_js(paste("var", name, "=", toJSON(value, auto_unbox = auto_unbox, ...)))) } } reset <- function() { private$context <- make_context(private$console) private$created <- Sys.time() if (length(global)) { context_eval(paste("var", global, "= this;", collapse = "\n"), private$context) } if (isTRUE(typed_arrays)) { context_enable_typed_arrays(private$context) } invisible() } console <- function() { this$eval("") message("This is V8 version ", version(), ". Press ESC or CTRL+C to exit.") on.exit(message("Exiting V8 console.")) buffer <- character() has_history <- !inherits(try(savehistory(tempfile()), silent = T), "try-error") if (has_history) { savehistory() on.exit(loadhistory(), add = TRUE) histfile <- ".V8history" if (file.exists(histfile)) { loadhistory(histfile) } else { file.create(histfile) } } rc.options(custom.completer = function(env) { env$comps <- tab_complete(this, env$token) }) on.exit({ rc.options(custom.completer = NULL) }, add = TRUE) repeat { prompt <- ifelse(length(buffer), " ", "~ ") if (nchar(line <- readline(prompt))) { buffer <- c(buffer, line) } if (identical(buffer, "exit")) break if (length(buffer) && (this$validate(buffer) || !nchar(line))) { if (has_history) { write(buffer, histfile, append = TRUE) loadhistory(histfile) } tryCatch(cat(undefined_to_null(this$eval(buffer))), error = function(e) { message(e$message) }) buffer <- character() } } } reset() lockEnvironment(environment(), TRUE) structure(environment(), class = c("V8", "environment"))})() 4: V8::v8() 5: renderPlot(...) 
 6: ..stacktraceon..(renderPlot(...))
 7: func()
```
<issue_comment>username_1: @username_2, I suspect that there's a bug in V8 (although of course it's possible that there's a memory bug elsewhere that is only showing up when V8 is used). This app runs without error with `RDsan` in the container from above. However, if you uncomment the first `v8()` call, then there will be a segfault at the second `v8()`.
```R
RDsan

library(shiny)

# V8::v8() # <-------- Uncommenting this line causes crash below

ui <- fluidPage(
  plotOutput("myplot"))

server <- function(input, output) {
  output$myplot <- renderPlot({
    ctx <- V8::v8() # <------- Crashes here
    ctx$eval('"Hello World"')
    plot(1)
  })
}

shinyApp(ui = ui, server = server,
         options = list(host = '0.0.0.0', port = 8000, launch.browser = FALSE))
```
The stack trace:
```
/usr/include/v8/v8.h:8921:22: runtime error: member call on null pointer of type 'struct Context'
    #0 0x7fa88b508141 in v8::Context::Scope::Scope(v8::Local<v8::Context>) /usr/include/v8/v8.h:8921
    #1 0x7fa88b508141 in make_context(bool) /tmp/Rtmp9XSLSj/R.INSTALL4e1e32344a87/V8/src/bindings.cpp:306
    #2 0x7fa88b4bbb34 in _V8_make_context /tmp/Rtmp9XSLSj/R.INSTALL4e1e32344a87/V8/src/RcppExports.cpp:75
    ...
```
The error happens here in v8.h: https://github.com/v8/v8/blob/6.8.275.32/include/v8.h#L8967
Note that the line number here (8967) is not the same as the one reported in the stack trace (8921), even though these say they're from the same version of v8 according to `v8-version.h`. I think this is probably because the one installed on the Docker image has Ubuntu-specific patches.
<issue_comment>username_3: @username_1 It looks like I'm able to run the app you posted in shiny built on Ubuntu 18.04, but not with the same R version built on 20.04. But instead of crashing, it returns the error
```
Warning: Error in context_eval: <string conversion failed>
```
which is the same error I get trying to plot something that relies on V8 in RStudio (not shiny).
```
FROM rocker/r-ver:4.0.0-ubuntu18.04
# FROM rocker/r-ver:4.0.0

RUN /rocker_scripts/install_shiny_server.sh
RUN R -e "Sys.setenv(DOWNLOAD_STATIC_LIBV8 = 1); \
          install.packages('V8', repos = 'https://cran.r-project.org')"

EXPOSE 3838
CMD ["/init"]
```
Don't know if that's helpful.
<issue_comment>username_2: This should be fixed in V8 3.6.0, now on CRAN. The problem was that we were hitting the JS stack limit. We have now relaxed the stack limit and also provide better error messages when we do hit it.
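As a rough illustration of the fix (a sketch, not taken from the thread): with the relaxed limit and better messages, blowing the JavaScript stack inside a V8 context should surface as a catchable JS error rather than a crash or a failed string conversion.
```r
# Assumes V8 >= 3.6.0: deep recursion now yields a catchable JS error
# message instead of a segfault or '<string conversion failed>'.
ctx <- V8::v8()
ctx$eval("
  function recurse(n) { return recurse(n + 1); }
  var msg = 'no error';
  try { recurse(0); } catch (e) { msg = e.message; }
  msg;
")
# expected result, approximately: 'Maximum call stack size exceeded'
```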
{'fraction_non_alphanumeric': 0.13072134749176126, 'fraction_numerical': 0.08289088246063712, 'mean_word_length': 1.8866428854538249, 'pattern_counts': {'":': 0, '<': 77, '<?xml version=': 0, '>': 37, 'https://': 15, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 13, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15199425', 'n_tokens_mistral': 8320, 'n_tokens_neox': 7152, 'n_words': 2034}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: can't train my datasets
username_0:
```
File "D:/mp_code/smog_faster_rcnn/train.py", line 154, in train
    blobs = self.data_layer.forward()
File "D:\mp_code\smog_faster_rcnn\lib\layer_utils\roi_data_layer.py", line 75, in forward
    blobs = self._get_next_minibatch()
File "D:\mp_code\smog_faster_rcnn\lib\layer_utils\roi_data_layer.py", line 71, in _get_next_minibatch
    return get_minibatch(minibatch_db, self._num_classes)
File "D:\mp_code\smog_faster_rcnn\lib\utils\minibatch.py", line 36, in get_minibatch
    im_blob, im_scales = _get_image_blob(roidb, random_scale_inds)
File "D:\mp_code\smog_faster_rcnn\lib\utils\minibatch.py", line 70, in _get_image_blob
    im = cv2.imread(roidb[i]['image'])
KeyError: 'image'
```
I have seen the same questions, but I still can't solve it. I tried to format my data like yours; can you help me?
<issue_comment>username_1: As I answered in the previous question, this depends on your dataset; probably it is not good for this net. The only way to solve it is to add the missing key in each xml file.
<issue_comment>username_0:
```xml
<?xml version="1.0"?>
<annotation>
  <folder>VOC2007</folder>
  <filename>000001.jpg</filename>
  <source>
    <database>The VOC2007 Database</database>
    <annotation>PASCAL VOC2007</annotation>
    <image>flickr</image>
  </source>
  <owner>
    <flickrid>Fried Camels</flickrid>
    <name><NAME></name>
  </owner>
  <size>
    <width>704</width>
    <height>576</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>83</xmin>
      <ymin>335</ymin>
      <xmax>148</xmax>
      <ymax>396</ymax>
    </bndbox>
  </object>
</annotation>
```
This is my xml, and I only recognize one class. I can't find what is wrong with my dataset!
<issue_comment>username_1: Did you open it with a text editor (Notepad if Windows or Kate on Linux) to paste the content here? If not, do it and paste the content here.
<issue_comment>username_0:
```xml
<annotation>
  <folder>VOC2007</folder>
  <filename>000007.jpg</filename>
  <source>
    <database>The VOC2007 Database</database>
    <annotation>PASCAL VOC2007</annotation>
    <image>flickr</image>
  </source>
  <owner>
    <flickrid>Fried Camels</flickrid>
    <name>J<NAME></name>
  </owner>
  <size>
    <width>704</width>
    <height>576</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>176</xmin>
      <ymin>260</ymin>
      <xmax>209</xmax>
      <ymax>293</ymax>
    </bndbox>
  </object>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>221</xmin>
      <ymin>260</ymin>
      <xmax>250</xmax>
      <ymax>296</ymax>
    </bndbox>
  </object>
</annotation>
```
<issue_comment>username_1: Again, the keys are missing. I need it in the form
```xml
<key1>
  <key2>value</key2>
  etc.
</key1>
```
<issue_comment>username_0:
```xml
<annotation>
  <folder>VOC2007</folder>
  <filename>000007.jpg</filename>
  <source>
    <database>The VOC2007 Database</database>
    <annotation>PASCAL VOC2007</annotation>
    <image>flickr</image>
  </source>
  <owner>
    <flickrid>Fried Camels</flickrid>
    <name>Jinky the Fruit Bat</name>
  </owner>
  <size>
    <width>704</width>
    <height>576</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>176</xmin>
      <ymin>260</ymin>
      <xmax>209</xmax>
      <ymax>293</ymax>
    </bndbox>
  </object>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>221</xmin>
      <ymin>260</ymin>
      <xmax>250</xmax>
      <ymax>296</ymax>
    </bndbox>
  </object>
</annotation>
```
Thank you.
<issue_comment>username_0:
```xml
<annotation>
  <folder>VOC2007</folder>
  <filename>000007.jpg</filename>
  <source>
    <database>The VOC2007 Database</database>
    <annotation>PASCAL VOC2007</annotation>
    <image>flickr</image>
  </source>
  <owner>
    <flickrid>Fried Camels</flickrid>
    <name><NAME></name>
  </owner>
  <size>
    <width>704</width>
    <height>576</height>
    <depth>3</depth>
  </size>
  <segmented>0</segmented>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>176</xmin>
      <ymin>260</ymin>
      <xmax>209</xmax>
      <ymax>293</ymax>
    </bndbox>
  </object>
  <object>
    <name>smog</name>
    <pose>Unspecified</pose>
    <truncated>0</truncated>
    <difficult>0</difficult>
    <bndbox>
      <xmin>221</xmin>
      <ymin>260</ymin>
      <xmax>250</xmax>
      <ymax>296</ymax>
    </bndbox>
  </object>
</annotation>
```
<issue_comment>username_1: The xml file is correct, so it should work. Make sure you placed the files in the right location (see https://github.com/dBeker/Faster-RCNN-TensorFlow-Python3/issues/82).
<issue_comment>username_2: When I run train.py, I get: TypeError: argument of type 'NoneType' is not iterable
<issue_comment>username_3: Has your problem been solved? I encountered this problem too.
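For background on the traceback at the top (an illustrative sketch based on the py-faster-rcnn lineage this repo derives from; names may differ here): the 'image' key that `_get_image_blob` reads is attached to each roidb entry during roidb preparation, not parsed from the annotation XML itself.
```python
# Sketch of roidb preparation in the py-faster-rcnn lineage; attribute
# names are approximate, not this repo's exact code.
def prepare_roidb(imdb):
    for i in range(imdb.num_images):
        # absolute path of the i-th image, e.g. .../VOC2007/JPEGImages/000001.jpg
        imdb.roidb[i]['image'] = imdb.image_path_at(i)
```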
{'fraction_non_alphanumeric': 0.14538062526286274, 'fraction_numerical': 0.040375718491518296, 'mean_word_length': 2.325874125874126, 'pattern_counts': {'":': 0, '<': 352, '<?xml version=': 1, '>': 352, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 5}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25628776', 'n_tokens_mistral': 2784, 'n_tokens_neox': 2490, 'n_words': 457}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>username_0: <issue_comment>username_0: Dependabot couldn't find a <anything>.(cs|vb|fs)proj for this project. Dependabot requires a <anything>.(cs|vb|fs)proj to evaluate your project's current .NET dependencies. It had expected to find one at the path: `/<anything>.(cs|vb|fs)proj`. If this isn't a .NET project, or if it is a library, you may wish to disable updates for it from within [Dependabot](https://app.dependabot.com). You can mention @dependabot in the comments below to contact the Dependabot team.<issue_closed> <issue_comment>username_0: test
{'fraction_non_alphanumeric': 0.10204081632653061, 'fraction_numerical': 0.00510204081632653, 'mean_word_length': 5.693181818181818, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6471856', 'n_tokens_mistral': 189, 'n_tokens_neox': 177, 'n_words': 69}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: what is the intended constrain behaviour?
username_0: https://github.com/mapbox/mapbox-gl-native/blob/35b579fde196ec3dd2a487ab5d528c8662571a43/src/mbgl/map/transform_state.cpp#L297-L299
```cpp
scale_ = util::max(scale_,
                   static_cast<double>((rotatedNorth() ? height : width) / util::tileSize),
                   static_cast<double>((rotatedNorth() ? width : height) / util::tileSize));
```
looks like it's equivalent to
```cpp
scale_ = util::max(scale_,
                   static_cast<double>(width / util::tileSize),
                   static_cast<double>(height / util::tileSize));
```
since `util::max` is symmetric in its arguments, both `rotatedNorth()` branches feed it the same {width, height} pair.
@username_1 Is this the intended behaviour? It's constraining based on the width even if the constrain mode is `HeightOnly`.
<issue_comment>username_1: Yes, I made this change in #3401, so that the only difference between the two modes is that WidthAndHeight prevents you from seeing both sides of the antimeridian at a time, while HeightOnly allows scrolling seamlessly across it. The current HeightOnly behavior matches other SDKs and avoids the situation where the same coordinate appears twice on screen. If there is a need for tiling the world multiple times, we should add another constraining mode.
<issue_comment>username_1: I suppose I could've fixed that issue by turning off all constraints, but we don't show anything beyond certain latitudes.<issue_closed>
{'fraction_non_alphanumeric': 0.08206896551724138, 'fraction_numerical': 0.027586206896551724, 'mean_word_length': 3.8691275167785233, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27569359', 'n_tokens_mistral': 433, 'n_tokens_neox': 380, 'n_words': 147}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Context in hooks
username_0: Guys, I love the recent implementation of context <3. Amazing job! I also wonder how you would feel about / what's your opinion on having context also be part of hooks.
```go
type Model struct {}

func (*Model) BeforeInsertCtx(ctx context.Context, db orm.DB) error {
    return nil
}
```
or is it somehow possible to get it out of the orm.DB interface?<issue_closed>
<issue_comment>username_0: @username_1 you are a legend! Where is the donate button? Give me that donate button!
{'fraction_non_alphanumeric': 0.075046904315197, 'fraction_numerical': 0.0075046904315197, 'mean_word_length': 4.287128712871287, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1582310', 'n_tokens_mistral': 163, 'n_tokens_neox': 151, 'n_words': 72}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Consolidate Extended Types
username_0: As per this comment: https://github.com/username_1/Infinity.jl/pull/10#issuecomment-639109854
A few of the extended-type functions could be put into an "infextendedcommon" subfolder so that we don't have as much code duplication. I'd like to open up a PR that attempts to remove as much code duplication as possible from the different types.
<issue_comment>username_1: Sounds good. This will make adding new types easier for #14.<issue_closed>
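As a rough sketch of the proposed layout (file and function names here are hypothetical, not the package's actual structure): shared definitions live once under the common subfolder and each extended-type file includes them.
```julia
# src/infextendedcommon/common.jl: shared helpers defined once
# (names are illustrative only)
isposinf(x::Real) = isinf(x) && x > 0
isneginf(x::Real) = isinf(x) && x < 0

# each extended-type file (e.g. the real-number and time extensions)
# would then simply reuse them:
include("infextendedcommon/common.jl")
```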
{'fraction_non_alphanumeric': 0.0625, 'fraction_numerical': 0.03125, 'mean_word_length': 5.180722891566265, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17475852', 'n_tokens_mistral': 152, 'n_tokens_neox': 137, 'n_words': 65}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: DDC giving numerous cast errors
username_0: Since moving to DDC I've been getting a lot of cast exceptions. Yesterday I rewrote a chunk of code to avoid this error:
```
CastError: Casting value of type '_Future<BaseModel>' to type 'Future<SearchPublicProfileSummaryResponse>' which is incompatible
```
I would have expected it to work, as SearchPublicProfileSummaryResponse extends BaseModel. I worked around the error by making the queueQuery method generic: `Future<T> queueQuery<T>(..) async { … }` and then using `app.queueQuery<SearchPublicProfileSummaryResponse>(…)` to query it.
Today I keep seeing these messages crop up in the console:
```
dart_sdk.js:5848 Ignoring cast fail from ImmutableMap to Map<Type, Serializer>
```
```
dart_sdk.js:5848 Ignoring cast fail from ImmutableMap to Map<String, Serializer>
```
```
dart_sdk.js:5848 Ignoring cast fail from ImmutableMap to Map<FullType, Function>
```
```
dart_sdk.js:5848 Ignoring cast fail from JSArray to List<int>
```
```
dart_sdk.js:5848 Ignoring cast fail from _Future to Future<MessagingConversationsResponse>
```
All of these exceptions seem to be built types failing to cast to regular types?
<issue_comment>username_1: The error with `Future` is because it's trying to cast a `_Future`, which is not a `Future`. It's possible something could be improved in the SDK here, but I don't think it relates to `built_value`. The messages related to ImmutableMap look like they could be related to `built_value`. Is there by any chance a stack trace associated, so I can see where in the `built_value` code they're coming from? Thanks.
<issue_comment>username_1: Looks like there is an issue with the default map type in MapBuilder:
```
factory MapBuilder([map = const {}]) {
```
-- this is where `ImmutableMap` without a type is coming from. I'll take a look.
<issue_comment>username_1: We also need to handle the case where the user passes in a `<dynamic>{}`.
<issue_comment>username_2: No, the message isn't clear, but it's because of the type argument, not because of the private `_Future` class, which does implement `Future`. It's warning you that a cast from `Future<dynamic>` to `Future<MessagingConversationsResponse>` is being ignored. In Dart 2, that's a runtime error.
<issue_comment>username_1: @username_2 that's the case for this error
```
dart_sdk.js:5848 Ignoring cast fail from _Future to Future<MessagingConversationsResponse>
```
--I was talking about the first one, sorry I wasn't clear:
```
CastError: Casting value of type '_Future<BaseModel>' to type 'Future<SearchPublicProfileSummaryResponse>' which is incompatible
```
but actually I see I was wrong about that one. Rather, the problem is that it's a downcast of the generic, which is not allowed.<issue_closed>
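A minimal Dart illustration of the downcast username_1 ends on (class names are stand-ins for the ones in the report): with reified generics, a future created as `Future<BaseModel>` is not a `Future<Sub>`, even when the value it completes with is a `Sub`.
```dart
class BaseModel {}

class SearchResponse extends BaseModel {}

// The returned future is reified as Future<BaseModel>, regardless of the
// concrete value it completes with.
Future<BaseModel> queueQuery() async => SearchResponse();

void main() {
  final f = queueQuery();
  print(f is Future<SearchResponse>); // false: reified as <BaseModel>
  // f as Future<SearchResponse>;     // would throw the CastError above
}
```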
{'fraction_non_alphanumeric': 0.08082575227431771, 'fraction_numerical': 0.01119664100769769, 'mean_word_length': 4.846625766871166, 'pattern_counts': {'":': 0, '<': 24, '<?xml version=': 0, '>': 24, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29123653', 'n_tokens_mistral': 864, 'n_tokens_neox': 788, 'n_words': 372}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Doing CDC with Dolt
username_0: Exploring use cases with Dolt. Maybe in the hub a change to the data is already just like CDC, and git servers tell you when someone makes a commit. But with CDC you want to be able to query and get an event when the data that is part of the query result set changes.
PS: Polling is cheating!
<issue_comment>username_1: @username_0 to clarify, is this a feature request? I see two types of CDC use-case for Dolt:
1. a downstream Dolt or other kind of process is capturing changes to a Dolt repo, and thus wants to be notified in the way you seem to be implying
2. a Dolt repository is capturing changes to the data returned by some endpoint, say an API, and we want to record a meaningful commit in the Dolt repo only when the data changes

We currently support (2), and will have documented examples of how to instrument such a workflow in Python, but Dolt upstreams do not notify downstreams when they have changes. Let us know if you have thoughts on this; we are excited to discuss Dolt and the value it can create.
-Oscar
<issue_comment>username_0: Thinking about CDC... Is the hub code in this repo?
<issue_comment>username_1: The hub code is not open source, and we don't have any immediate plans to change that. By CDC, do you mean capturing changes in some upstream data source? Can you give me an example? I have been working on some "cookbooks" for this sort of thing, so I'm eager to hear what you have come across.
<issue_comment>username_0: Sorry, yes, I mean CDC (change data capture). Use case: you have a data type and you want to do subscriptions, so you can tell all subscribers when the data changes (typically a CUD: create, update, delete).
<issue_comment>username_1: For Dolt the use-case you are describing implies that Dolt repositories are "live" processes running at some known address. In particular, the repository being monitored would have to know its list of subscribers to achieve the "push" workflow you are alluding to, where changes are automatically percolated to dependent repos. I don't think this is feasible, or even desirable, for Dolt. For DoltHub it's certainly possible that the application could look at Dolt repos that are configured with an upstream that is itself a DoltHub repo, and then wake them up and trigger a `dolt pull origin <branch>` update when a commit is posted to `<branch>`. We do not currently support this, but we can see the utility, and we plan something of the sort. Does this answer your question?<issue_closed>
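To make use-case (2) concrete, here is a rough Python sketch of the "commit only when the data changed" workflow; the endpoint, table name, and the reliance on git-style `dolt status` wording are illustrative assumptions, not an official recipe.
```python
import subprocess

import requests  # assumed available; any HTTP client works


def capture_once(url: str, repo: str) -> None:
    """Fetch a CSV endpoint and commit to a Dolt repo only on change."""
    csv_path = "/tmp/snapshot.csv"
    with open(csv_path, "wb") as f:
        f.write(requests.get(url).content)  # assumes the endpoint serves CSV
    # upsert into a (hypothetical) 'snapshots' table
    subprocess.run(["dolt", "table", "import", "-u", "snapshots", csv_path],
                   cwd=repo, check=True)
    status = subprocess.run(["dolt", "status"], cwd=repo,
                            capture_output=True, text=True, check=True)
    if "nothing to commit" in status.stdout:  # assumes git-like wording
        return  # data unchanged: record no commit
    subprocess.run(["dolt", "add", "snapshots"], cwd=repo, check=True)
    subprocess.run(["dolt", "commit", "-m", "capture upstream change"],
                   cwd=repo, check=True)
```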
{'fraction_non_alphanumeric': 0.03837209302325582, 'fraction_numerical': 0.003875968992248062, 'mean_word_length': 4.203629032258065, 'pattern_counts': {'":': 0, '<': 10, '<?xml version=': 0, '>': 10, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '6138351', 'n_tokens_mistral': 681, 'n_tokens_neox': 634, 'n_words': 436}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: amp-iframe allow="microphone"
username_0: ## How do we reproduce the issue?
1. access www.accordersaguitare.com with Chrome
2. open the Console and see the message

## What browsers are affected?
Firefox doesn't warn about this. Safari doesn't even support `iframe allow="microphone"` yet, see: https://bugs.webkit.org/show_bug.cgi?id=167430

## Which AMP version is affected?
1508896929494
<issue_comment>username_1: /to @username_2 ptal
<issue_comment>username_2: related https://github.com/ampproject/amphtml/issues/11541<issue_closed>
<issue_comment>username_0: Thanks!
{'fraction_non_alphanumeric': 0.10289389067524116, 'fraction_numerical': 0.04983922829581994, 'mean_word_length': 5.557894736842106, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16807710', 'n_tokens_mistral': 230, 'n_tokens_neox': 192, 'n_words': 56}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Allow range filters on columns with coercions
username_0: The remaining work is to add support for filters on columns with coercions when these columns are only used for filtering and are not projected out. This change will be in a separate PR.
```
== NO RELEASE NOTE ==
```
<issue_comment>username_1: Nice structure. LGTM.
<issue_comment>username_0: @username_1 Turns out that coercion works for filter-only columns with no changes. This is because `HiveSplit#partitionSchemaDifference` includes all the columns with different schema, not just the ones projected out of the scan, and this is the data that's used to generate coercions. Added a test and removed a TODO.
{'fraction_non_alphanumeric': 0.05218617771509168, 'fraction_numerical': 0.005641748942172073, 'mean_word_length': 5.068376068376068, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15272862', 'n_tokens_mistral': 185, 'n_tokens_neox': 173, 'n_words': 100}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: DeepImbalancedRegression username_0: @jrzaurin as we discussed this week: * I rebased the branch to attention_mlp * fixed issues after rebasing * renamed weight to lds_weight Some tests are failing, but they are not related to the DeepImbalancedRegression code: "E AttributeError: 'TabNet' object has no attribute 'embed_and_cont_dim'" in the following tests:
```
FAILED tests/test_model_functioning/test_fit_methods.py::test_fit_objectives_tabnet[X_wide0-X_tab0-target0-regression-X_wide_test0-X_tab_test0-None-1-None-4]
FAILED tests/test_model_functioning/test_fit_methods.py::test_fit_objectives_tabnet[X_wide1-X_tab1-target1-binary-X_wide_test1-X_tab_test1-None-1-2-3]
FAILED tests/test_model_functioning/test_fit_methods.py::test_fit_objectives_tabnet[X_wide2-X_tab2-target2-multiclass-X_wide_test2-X_tab_test2-None-3-3-4]
FAILED tests/test_model_functioning/test_fit_methods.py::test_fit_objectives_tabnet[X_wide3-X_tab3-target3-regression-None-None-X_test3-1-None-4]
FAILED tests/test_model_functioning/test_fit_methods.py::test_fit_objectives_tabnet[X_wide4-X_tab4-target4-binary-None-None-X_test4-1-2-3]
FAILED tests/test_model_functioning/test_fit_methods.py::test_fit_objectives_tabnet[X_wide5-X_tab5-target5-multiclass-None-None-X_test5-3-3-4]
FAILED tests/test_model_functioning/test_miscellaneous.py::test_explain_mtx_and_feat_imp - AttributeEr...
```
<issue_comment>username_0: comments to latest commit: * fixed LDS/FDS code and adjusted the corresponding notebook according to the new code * fixed code related to the following test error: tests/test_model_functioning/test_fit_methods.py ......F.............F.. ; def test_fit_with_deephead(): def test_aliases(): more in pull request comments
comments:
1. finetune uses the trainloader, which outputs lds_weight (None if with_lds=False); the loss in training/_finetune row 265 could be computed the same way as in trainer (_train_step) rows 1006/1007/1035, but for this the finetune object would have to be initialized with the with_lds parameter
* my opinion: leave LDS out of finetuning to keep it a bit simpler
2. the "stop_after_finetuning" keyword was not commented in the master branch (leftover code?) and in my opinion is not doing anything (except the keyword is missing in the current branch, which causes issues...) so I removed it: training/trainer row 436: if finetune_args["stop_after_finetuning"]: print("Fine-tuning finished") return
3. removed support for (the combination of?) deephead with FDS - again to keep it simpler: models/wide_deep row 412 -> and added _add_pred_layer() into the deephead if/else statement in row 183
@jrzaurin check if you agree with the changes; if yes, then the only thing left is to create some unit tests for LDS/FDS (Bayesian models already have tests). After I create the tests we could finally merge this one ("fingers crossed" :) )
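As context for comment 1 above, this is only a sketch of the general LDS idea (re-weighting a per-sample loss), not the library's actual `_train_step` code; the function name and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def weighted_mse_loss(y_pred, y_true, lds_weight=None):
    """Per-sample MSE, optionally re-weighted by LDS weights.

    `lds_weight` mirrors what the train loader yields in the discussion
    above: None when with_lds=False, otherwise one weight per sample.
    """
    loss = F.mse_loss(y_pred, y_true, reduction="none")
    if lds_weight is not None:
        loss = loss * lds_weight.view_as(loss)
    return loss.mean()

# Example: down-weight a sample from an over-represented target region.
pred = torch.tensor([2.0, 0.5, 3.0])
true = torch.tensor([2.5, 0.0, 3.0])
weights = torch.tensor([0.2, 1.5, 1.0])
print(weighted_mse_loss(pred, true, weights))
```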
{'fraction_non_alphanumeric': 0.08109989557953359, 'fraction_numerical': 0.025060911938739994, 'mean_word_length': 5.576659038901602, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14800127', 'n_tokens_mistral': 976, 'n_tokens_neox': 953, 'n_words': 256}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Input focused style should be cleared username_0: - [x] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Reproduction link [![Edit on CodeSandbox](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/antd-reproduction-template-forked-ph4s3?file=/index.js) ### Steps to reproduce 1. input something 2. press Enter 3. wait for 5 seconds 4. click other places ### What is expected? input should not have focused styles ### What is actually happening? input still have a highlight border | Environment | Info | |---|---| | antd | 4.16.13 | | React | 17.0.2 | | System | MacOS | | Browser | Chrome Latest | <!-- generated by ant-design-issue-helper. DO NOT REMOVE --> <issue_comment>username_1: ![Kapture 2021-11-02 at 15 38 24](https://user-images.githubusercontent.com/507615/139805012-eb6221e9-6673-43d6-9578-588e6ecdc1e4.gif) I can't reproduce in my machine.<issue_closed>
{'fraction_non_alphanumeric': 0.13120899718837864, 'fraction_numerical': 0.06466729147141519, 'mean_word_length': 3.7466666666666666, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7993568', 'n_tokens_mistral': 407, 'n_tokens_neox': 351, 'n_words': 97}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Conditional_VAE.py does not work username_0: Can you fix the code please? <issue_comment>username_1: @username_0 Can you let me know where are you running the code. It should be run completely in a cpu or a gpu. `device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')` this part is essential. Make sure you are initializing the device properly. <issue_comment>username_2: Adding the line `model = model.to(device)` fixes this for me.<issue_closed>
{'fraction_non_alphanumeric': 0.08450704225352113, 'fraction_numerical': 0.008048289738430584, 'mean_word_length': 5.552631578947368, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '16680941', 'n_tokens_mistral': 147, 'n_tokens_neox': 144, 'n_words': 64}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: High contrast black focus is hard to see on default button username_0: ### Bug Report - __Browser and OS versions__: Edge Win10 #### Priorities and help requested (not applicable if asking question): Requested priority: Normal Products/sites affected: VSTS #### Describe the issue: Focus cannot be seen after navigating to a default button in high contrast black. Focused: ![image](https://user-images.githubusercontent.com/5342645/37421106-b4a12dd2-2775-11e8-802b-cd5d2e6f2342.png) Not Focused: ![image](https://user-images.githubusercontent.com/5342645/37421130-c02db742-2775-11e8-967c-386ffbee73ee.png) Repro can be seen on the office fabric components page <issue_comment>username_1: Thanks for reporting! I have a PR open to fix this here: #4221 , and we're tracking the bug here: #3419<issue_closed>
{'fraction_non_alphanumeric': 0.09111880046136102, 'fraction_numerical': 0.09573241061130335, 'mean_word_length': 4.786666666666667, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29176451', 'n_tokens_mistral': 325, 'n_tokens_neox': 272, 'n_words': 90}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Find a way to convert Gltf primitive mesh and material to a simplified CPU buffer for rendering username_0: Currently, both Unreal and O3DE has a very similar way to load Gltf. That is it only cares about creating rendering resources from Gltf Primitive and Material. Even though the data structure for GPU resources maybe different. The CPU representation is almost similar with a `vector<vec3> positions`, `vector<vec3> normals`, `vector<vec3> tangents`, etc... If we can find a way to do that work in Cesium Native, the integration is simplified alot on Engine side as they just need to load all those CPU buffers into GPU. This observation is only based on Static Mesh. I'm not sure about Animation and other stuffs though
{'fraction_non_alphanumeric': 0.04204993429697766, 'fraction_numerical': 0.006570302233902759, 'mean_word_length': 4.906976744186046, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19217565', 'n_tokens_mistral': 184, 'n_tokens_neox': 174, 'n_words': 121}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Migrate toolbox to stateful set for audit username_0: # Motivation and Context We need audit for toolbox. # What has changed Toolbox has become a stateful set. # How to test? Cross fingers etc. # Links Trello: https://trello.com/c/Y95bUvnJ <issue_comment>username_0: ![](https://github.trello.services/images/mini-trello-icon.png) [Tech/support: Audit logging of tools (5)](https://trello.com/c/Y95bUvnJ/256-tech-support-audit-logging-of-tools-5)
{'fraction_non_alphanumeric': 0.12348178137651822, 'fraction_numerical': 0.022267206477732792, 'mean_word_length': 4.43956043956044, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10753384', 'n_tokens_mistral': 194, 'n_tokens_neox': 172, 'n_words': 43}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add param strictness where possible. username_0: rebased https://github.com/propelorm/Propel2/pull/1831 WIP <issue_comment>username_1: @username_0 without digging in and taking a stab at it, it sounds like the hydrator is mis-calculating the number of columns for a given model. This is one of the things I hate about AR/Propel: the strict ties to the number of columns defined, and even the ordering, I believe cause issues in places. My guess is that some typing is causing a column to not be included as part of hydration. <issue_comment>username_1: @username_0 haven't looked into the code, but this one is related to the number of columns in the select not matching up with the table. This is a common problem with Propel due to its design. Not sure why the typing caused this, but I guess the query builder isn't properly interpreting types. <issue_comment>username_0: For some reason it affects only PHP 8.1 builds <issue_comment>username_1: For `StatementWrapper::execute` we should drop support for `$inputParameters` being nullable and work our way backwards from that. The issue appears to be in the execution of the prepared statement there. I'm assuming that `args` and `$inputParameters` are an issue. I don't know why without debugging it at runtime. But tightening up the typing will help pinpoint the issue. <issue_comment>username_0: Well, that didn't really work; it broke all builds with really unclear messages. It's quite unclear what the actual issue is. <issue_comment>username_1: Well, that would likely break a lot: anything explicitly passing in null to `execute`. The default value would become an empty array though for all the cases where there aren't any arguments. But that's the point, to ensure that they're properly passing in an array. A null value being passed in is prime for errors in unforeseen ways. What's in your SQL logs from Propel? `callUserFunctionWithLogging` logs the error. You need to find out why there is a mismatch in the `args` being passed <issue_comment>username_0: @username_1 After https://github.com/propelorm/Propel2/pull/1841 is merged in, we can then merge to master. Does it all look good now? <issue_comment>username_1: @username_0 Overall, this looks great. There are obviously a ton of changes here. It'd be nice to have typed properties as well. I find it's easier to type methods and properties at the same time. However, seeing as this is already done, that'd be better in another PR. I added some comments. <issue_comment>username_0: Changes done as per review. Shall we continue? <issue_comment>username_0: @username_1 Do you want to make follow-up PRs regarding more types? Especially also around generated code? <issue_comment>username_1: @username_0 I think we need to get property typing done.
{'fraction_non_alphanumeric': 0.0493131384290243, 'fraction_numerical': 0.010567101091933779, 'mean_word_length': 4.6913827655310625, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 13, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '21890233', 'n_tokens_mistral': 753, 'n_tokens_neox': 721, 'n_words': 423}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Allow unlocking remote LND username_0: It would be nice if Zap iOS allowed you to authenticate and unlock a locked LND instance. This is possible with the desktop application and I use Zap to unlock my LNDs all the time <issue_comment>username_1: OK, how are we doing this? Users have to be able to enter a password and then tap on a button that says "Unlock". Should we save the password? Do we need a design? <issue_comment>username_0: Yeah, definitely needs design. I'd say no on saving the passwords. I'll give all of this some thought, I just wanted to document it in an issue so I didn't forget 😄<issue_closed>
{'fraction_non_alphanumeric': 0.04447852760736196, 'fraction_numerical': 0.004601226993865031, 'mean_word_length': 4.533898305084746, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '15196713', 'n_tokens_mistral': 181, 'n_tokens_neox': 173, 'n_words': 108}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Enquiring about the login mechanism used in simple_salesforce username_0: As per the upcoming release of Salesforce: "Salesforce Spring ’22 is coming out soon which includes enhancing some security features around the ability to log in via URLs that include credentials. Basically, Salesforce is disabling the ability to log in via a URL that includes a username and password in it." I wanted to confirm whether simple_salesforce is doing this internally. <issue_comment>username_1: If I understand [the doc](https://help.salesforce.com/s/articleView?id=release-notes.rn_security_disable_login_credentials_in_query_string.htm&type=5&release=234) correctly, this module does **not** use the "un" and "pw" query parameters.
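To make that concrete: a typical simple_salesforce login looks like the sketch below (credentials are placeholders). The library authenticates by POSTing a SOAP login request whose body carries the credentials, so no username or password appears in the URL query string:

```python
from simple_salesforce import Salesforce

# Placeholder credentials; they travel in the SOAP request body,
# not as "un"/"pw" query-string parameters on the login URL.
sf = Salesforce(
    username="user@example.com",
    password="hunter2",
    security_token="XXXXXXXXXXXXXXXX",
)
print(sf.session_id is not None)  # True once the login succeeded
```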
{'fraction_non_alphanumeric': 0.062087186261558784, 'fraction_numerical': 0.010568031704095112, 'mean_word_length': 5.890909090909091, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29908251', 'n_tokens_mistral': 207, 'n_tokens_neox': 192, 'n_words': 89}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: v0.05 rtsp not working username_0: 0:00:00.015519922 24856 0x560f6776ea00 INFO DSL DslSourceBintr.cpp:387:DecodeSourceBintr: : URI Path = rtsp:// 0:00:00.020504886 24856 0x560f6776ea00 INFO DSL DslServices.cpp:2191:SourceRtspNew: : New RTSP Source 'rtsp-source' created successfully 0:00:00.020537865 24856 0x560f6776ea00 INFO DSL DslServices.cpp:2534:PrimaryGieNew: : Infer config file: config_infer_primary_yoloV3.txt 0:00:00.020572790 24856 0x560f6776ea00 INFO DSL DslServices.cpp:2543:PrimaryGieNew: : Model engine file: model_b1_int8.engine 0:00:00.020618009 24856 0x560f6776ea00 INFO DSL DslGieBintr.cpp:53:GieBintr: : Creating GIE 'gie-primary-gie' with unique Id = 1557520627 0:00:00.052669228 24856 0x560f6776ea00 INFO DSL DslServices.cpp:2561:PrimaryGieNew: : New Primary GIE 'primary-gie' created successfully 0:00:00.053579795 24856 0x560f6776ea00 INFO DSL DslServices.cpp:3365:TilerNew: : New Tiler 'tiler' created successfully 0:00:00.054451059 24856 0x560f6776ea00 INFO DSL DslServices.cpp:3772:OsdNew: : New OSD 'on-screen-display' created successfully 0:00:00.054706612 24856 0x560f6776ea00 INFO DSL DslServices.cpp:4146:SinkFakeNew: : New Fake Sink 'rtsp-sink' created successfully 0:00:00.054829557 24856 0x560f6776ea00 INFO DSL DslServices.cpp:4984:PipelineNew: : New PIPELINE 'pipeline' created successfully 0:00:00.055413589 24856 0x560f6776ea00 INFO DSL DslPipelineSourcesBintr.cpp:268:SetStreamMuxDimensions: : Setting StreamMux dimensions: width = 1920, height = 1920 0:00:00.055572633 24856 0x560f6776ea00 INFO DSL DslServices.cpp:5073:PipelineComponentAdd: : Component 'rtsp-source' was added to Pipeline 'pipeline' successfully 0:00:00.055640005 24856 0x560f6776ea00 INFO DSL DslServices.cpp:5073:PipelineComponentAdd: : Component 'primary-gie' was added to Pipeline 'pipeline' successfully 0:00:00.055696301 24856 0x560f6776ea00 INFO DSL DslServices.cpp:5073:PipelineComponentAdd: : Component 'tiler' was added to Pipeline 'pipeline' successfully 0:00:00.055757148 24856 0x560f6776ea00 INFO DSL DslServices.cpp:5073:PipelineComponentAdd: : Component 'on-screen-display' was added to Pipeline 'pipeline' successfully 0:00:00.056011528 24856 0x560f6776ea00 INFO DSL DslServices.cpp:5073:PipelineComponentAdd: : Component 'rtsp-sink' was added to Pipeline 'pipeline' successfully 0:00:00.056044608 24856 0x560f6776ea00 INFO DSL DslPipelineSourcesBintr.cpp:246:SetStreamMuxBatchProperties: : Setting StreamMux batch properties: batch-size = 1, batch-timeout = 4000000 0:00:00.056136624 24856 0x560f6776ea00 INFO DSL DslSourceBintr.cpp:109:LinkToSink: : Linking Source 'rtsp-source' to Pad 'sink_0' for StreamMux 'stream_muxer' 0:00:00.056249128 24856 0x560f6776ea00 INFO DSL DslPipelineBintr.cpp:273:LinkAll: : Pipeline 'pipeline' Linked up all Source 'sources-bin' successfully 0:00:00.056274487 24856 0x560f6776ea00 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'primary-gie' 0:00:00.056575254 24856 0x560f6776ea00 INFO DSL DslBranchBintr.cpp:280:LinkAll: : Branch 'pipeline' Linked up Primary GIE 'primary-gie' successfully 0:00:00.056588585 24856 0x560f6776ea00 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'tiler' 0:00:00.056775312 24856 0x560f6776ea00 INFO DSL DslBranchBintr.cpp:354:LinkAll: : Branch 'pipeline' Linked up Tiler 'tiler' successfully 0:00:00.056787560 24856 0x560f6776ea00 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'on-screen-display' 0:00:00.057145639 24856 0x560f6776ea00 
INFO DSL DslBranchBintr.cpp:369:LinkAll: : Branch 'pipeline' Linked up OSD 'on-screen-display' successfully 0:00:00.057162092 24856 0x560f6776ea00 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'rtsp-sink' 0:00:00.057172355 24856 0x560f6776ea00 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'sinks-bin' 0:00:00.057212986 24856 0x560f6776ea00 INFO DSL DslSinkBintr.cpp:101:LinkToSource: : Linking Sink 'rtsp-sink' to Pad 'src_0' for Tee 'sink_bin_tee' 0:00:00.057226316 24856 0x560f6776ea00 INFO DSL DslElementr.h:168:IsFactoryName: : commparing expected factory'nvstreamdemux' with actual 'tee' for element 'sink_bin_tee' 0:00:00.057515446 24856 0x560f6776ea00 INFO DSL DslBranchBintr.cpp:414:LinkAll: : Branch 'pipeline' Linked up all Sinks 'sinks-bin' successfully 0:00:00.057530432 24856 0x560f6776ea00 INFO DSL DslBintr.h:227:SetState: : Changing state to 'PLAYING' for Bintr 'pipeline' Creating LL OSD context new Deserialize yoloLayerV3 plugin: yolo_83 Deserialize yoloLayerV3 plugin: yolo_95 Deserialize yoloLayerV3 plugin: yolo_107 0:00:04.334478643 24856 0x560f6776ea00 INFO DSL DslBintr.h:241:SetState: : State change will complete asynchronously for Bintr 'pipeline' 0:00:04.454866032 24856 0x7fd78c003f70 INFO DSL DslSourceBintr.cpp:914:HandleSourceElementOnPadAdded: : Caps structs name application/x-rtp 0:00:04.454946449 24856 0x7fd78c003f70 INFO DSL DslSourceBintr.cpp:929:HandleSourceElementOnPadAdded: : Video decode linked for URI source 'rtsp-source' (DSL_test:24856): GLib-GObject-WARNING **: 21:49:04.776: g_object_set_is_valid_property: object class 'nvv4l2decoder' has no property named 'enable-max-performance' (DSL_test:24856): GLib-GObject-WARNING **: 21:49:04.776: g_object_set_is_valid_property: object class 'nvv4l2decoder' has no property named 'bufapi-version' 0:00:04.526482221 24856 0x7fd778003770 INFO DSL DslSourceBintr.cpp:914:HandleSourceElementOnPadAdded: : Caps structs name application/x-rtp 0:00:04.526536568 24856 0x7fd778003770 ERROR DSL DslSourceBintr.cpp:925:HandleSourceElementOnPadAdded: : Failed to link de-payload to pipeline terminate called without an active exception Aborted (core dumped) Any help would be greatly appreciated! <issue_comment>username_1: Hi @username_0, The RTSP source in v0.05 has some limitations, a fixed `rtph264depay` for one; I'm in the process of adding stream-selection support in v0.07. Can you tell me a little about your camera settings and which protocol value you are using when you create the Source? Also, do you plan to upgrade to DS 5.0? Unfortunately, I'm not able to support any fixes/updates to pre-5.0 releases <issue_comment>username_1: After you moved to DS 5.0, which version of DSL are you now using ... still v0.05.alpha? Or are you using a later version? DS 5.0 will only work with v0.06.alpha or later (0.07 is still in progress) Sorry for the basic questions, but it looks like you are using C/C++, correct? The source startup looks correct in the above... but the primary-gie is not happy with the stream format. I see you are using a Yolo detector, something I have not had a chance to verify yet, but I'll make it a priority now. <issue_comment>username_0: Is that the problem? <issue_comment>username_1: Just FYI, as I'm sure you've noticed, all the current examples are in Python.... one of the best files to see working C++ code is ```/test/api/DslPipelinePlayComponentsTest.cpp``` The path looks good...
the following shows that the Source has come up ``` 0:00:17.522395616 9272 0x7f00013050 INFO DSL DslSourceBintr.cpp:730:HandleSourceElementOnPadAdded: : Caps structs name video/x-raw 0:00:17.523840831 9272 0x7f00013050 INFO DSL DslSourceBintr.cpp:750:HandleSourceElementOnPadAdded: : Video decode linked for URI source 'uri-source' ``` Everything looks good right up to ``` GST_MESSAGE_TAG 0:00:18.671663301 9272 0x5575a972d0 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop: error: Internal data stream error. 0:00:18.671905221 9272 0x5575a972d0 WARN nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop: error: streaming stopped, reason not-linked (-1) 0:00:18.672463973 9272 0x557562d8a0 ERROR DSL DslPipelineBintr.cpp:751:HandleErrorMessage: : Error message 'Internal data stream error.' received from 'gie-primary-gie' ``` Can you set up the following so we can take a look at the pipeline graph? https://github.com/canammex-tech/deepstream-services-library/blob/master/docs/debugging-dsl.md#creating-pipeline-graphs <issue_comment>username_1: Also, you can try the Pipeline without the GIE, just Source, Tiler, OSD and Sinks <issue_comment>username_0: @username_1, Yes, without GIE, no error. <issue_comment>username_1: I'll try and look at the Yolo examples tomorrow... getting late here for me <issue_comment>username_0: @username_1, OK, thank you. But this is another detector, same error: ``` 0:00:00.065954040 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:280:GetServices: : Services Initialization 0:00:00.067225409 18682 0x55a0b13aa0 INFO DSL DslSourceBintr.cpp:389:DecodeSourceBintr: : URI Path = file:/home/streams/sample_1080p_h264.mp4 0:00:00.076384711 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:2163:SourceUriNew: : New URI Source 'uri-source' created successfully 0:00:00.076520744 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:2533:PrimaryGieNew: : Infer config file: config_infer_primary_nano.txt 0:00:00.076607721 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:2545:PrimaryGieNew: : Model engine file: resnet10.caffemodel_b1_gpu0_fp16.engine 0:00:00.076874187 18682 0x55a0b13aa0 INFO DSL DslGieBintr.cpp:53:GieBintr: : Creating GIE 'gie-primary-gie' with unique Id = 1557520627 0:00:00.200893086 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:2564:PrimaryGieNew: : New Primary GIE 'primary-gie' created successfully 0:00:00.204802236 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:3368:TilerNew: : New Tiler 'tiler' created successfully 0:00:00.210326983 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:3775:OsdNew: : New OSD 'on-screen-display' created successfully 0:00:00.211249902 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:4149:SinkFakeNew: : New Fake Sink 'rtsp-sink' created successfully 0:00:00.211695153 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:4987:PipelineNew: : New PIPELINE 'pipeline' created successfully 0:00:00.215195660 18682 0x55a0b13aa0 INFO DSL DslPipelineSourcesBintr.cpp:268:SetStreamMuxDimensions: : Setting StreamMux dimensions: width = 1920, height = 1080 0:00:00.215852689 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:5076:PipelineComponentAdd: : Component 'uri-source' was added to Pipeline 'pipeline' successfully 0:00:00.216183059 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:5076:PipelineComponentAdd: : Component 'primary-gie' was added to Pipeline 'pipeline' successfully 0:00:00.216424053 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:5076:PipelineComponentAdd: : Component 'tiler' was added to Pipeline 'pipeline' successfully 0:00:00.216652535 18682 0x55a0b13aa0 INFO DSL
DslServices.cpp:5076:PipelineComponentAdd: : Component 'on-screen-display' was added to Pipeline 'pipeline' successfully 0:00:00.217712735 18682 0x55a0b13aa0 INFO DSL DslServices.cpp:5076:PipelineComponentAdd: : Component 'rtsp-sink' was added to Pipeline 'pipeline' successfully 0:00:00.217829376 18682 0x55a0b13aa0 INFO DSL DslPipelineSourcesBintr.cpp:246:SetStreamMuxBatchProperties: : Setting StreamMux batch properties: batch-size = 1, batch-timeout = 4000000 0:00:00.219021705 18682 0x55a0b13aa0 INFO DSL DslSourceBintr.cpp:109:LinkToSink: : Linking Source 'uri-source' to Pad 'sink_0' for StreamMux 'stream_muxer' 0:00:00.219869135 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:273:LinkAll: : Pipeline 'pipeline' Linked up all Source 'sources-bin' successfully 0:00:00.220014801 18682 0x55a0b13aa0 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'primary-gie' 0:00:00.221501820 18682 0x55a0b13aa0 INFO DSL DslBranchBintr.cpp:280:LinkAll: : Branch 'pipeline' Linked up Primary GIE 'primary-gie' successfully 0:00:00.221570652 18682 0x55a0b13aa0 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'tiler' 0:00:00.222847302 18682 0x55a0b13aa0 INFO DSL DslBranchBintr.cpp:354:LinkAll: : Branch 'pipeline' Linked up Tiler 'tiler' successfully 0:00:00.222970823 18682 0x55a0b13aa0 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'on-screen-display' 0:00:00.225138840 18682 0x55a0b13aa0 INFO DSL DslBranchBintr.cpp:369:LinkAll: : Branch 'pipeline' Linked up OSD 'on-screen-display' successfully 0:00:00.225231384 18682 0x55a0b13aa0 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'rtsp-sink' 0:00:00.225319801 18682 0x55a0b13aa0 INFO DSL DslBintr.h:208:SetBatchSize: : Setting batch size to '1' for Bintr 'sinks-bin' 0:00:00.225538811 18682 0x55a0b13aa0 INFO DSL DslSinkBintr.cpp:101:LinkToSource: : Linking Sink 'rtsp-sink' to Pad 'src_0' for Tee 'sink_bin_tee' 0:00:00.225591675 18682 0x55a0b13aa0 INFO DSL DslElementr.h:168:IsFactoryName: : commparing expected factory'nvstreamdemux' with actual 'tee' for element 'sink_bin_tee' 0:00:00.227398153 18682 0x55a0b13aa0 INFO DSL DslBranchBintr.cpp:414:LinkAll: : Branch 'pipeline' Linked up all Sinks 'sinks-bin' successfully 0:00:00.227484010 18682 0x55a0b13aa0 INFO DSL DslBintr.h:227:SetState: : Changing state to 'PAUSED' for Bintr 'pipeline' 0:00:03.478251542 18682 0x55a0b13aa0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<gie-primary-gie> NvDsInferContext[UID 1557520627]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1577> [UID = 1557520627]: deserialized trt engine from :resnet10.caffemodel_b1_gpu0_fp16.engine INFO: [Implicit Engine Info]: layers num: 3 0 INPUT kFLOAT input_1 3x272x480 1 OUTPUT kFLOAT conv2d_bbox 16x17x30 2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x17x30 0:00:03.478493879 18682 0x55a0b13aa0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<gie-primary-gie> NvDsInferContext[UID 1557520627]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1681> [UID = 1557520627]: Use deserialized engine model: resnet10.caffemodel_b1_gpu0_fp16.engine 0:00:03.483040410 18682 0x55a0b13aa0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<gie-primary-gie> [UID 1557520627]: Load new model:config_infer_primary_nano.txt sucessfully 0:00:03.487133561 18682 0x55a0b13aa0 INFO DSL DslBintr.h:241:SetState: : State change will complete asynchronously for Bintr 'pipeline' Opening in 
BLOCKING MODE NvMMLiteOpen : Block : BlockType = 261 NVMEDIA: Reading vendor.tegra.display-size : status: 6 NvMMLiteBlockCreate : Block : BlockType = 261 0:00:03.828680358 18682 0x7f08012850 INFO DSL DslSourceBintr.cpp:730:HandleSourceElementOnPadAdded: : Caps structs name video/x-raw 0:00:03.830454708 18682 0x7f08012850 INFO DSL DslSourceBintr.cpp:750:HandleSourceElementOnPadAdded: : Video decode linked for URI source 'uri-source' 0:00:03.830770390 18682 0x7f08012850 INFO DSL DslSourceBintr.cpp:730:HandleSourceElementOnPadAdded: : Caps structs name audio/x-raw 0:00:03.831257050 18682 0x55a0b13aa0 INFO DSL DslBintr.h:253:SetState: : State change completed asynchronously for Bintr'pipeline' 0:00:03.831425915 18682 0x55a0b13aa0 INFO DSL DslBintr.h:227:SetState: : Changing state to 'PLAYING' for Bintr 'pipeline' 0:00:03.833914830 18682 0x55a0b13aa0 INFO DSL DslBintr.h:233:SetState: : State change completed synchronously for Bintr'pipeline' 0:00:03.834047055 18682 0x55a0b13aa0 INFO DSL DslServices.h:520:GetMainLoopHandle: : Returning Handle to MainLoop 0:00:03.844038427 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:594:HandleStateChanged: : GST_STATE_NULL => GST_STATE_READY 0:00:03.844212061 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.844407166 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.844589919 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.844678144 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.844833153 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.845228772 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.845766632 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.848169307 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.848399773 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.848636670 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.848718911 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.848827040 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.848960353 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_ELEMENT 0:00:03.849032897 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849104738 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849240387 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_ELEMENT 0:00:03.849311203 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849468997 18682 0x55a0b13aa0 INFO DSL 
DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849592774 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849698118 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849770055 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:03.849843304 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS
[Truncated]
0:00:04.080438181 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.102660847 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.113052254 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.243307295 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.256901799 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.272847168 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.296462548 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.314534014 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:04.575654501 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:05.027388920 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:06.027429883 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:06.066068065 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:06.085656502 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:06.144712376 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:10.125894189 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:10.282781717 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_TAG 0:00:10.285590314 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_STREAM_STATUS 0:00:10.286341200 18682 0x55a0b13aa0 ERROR DSL DslPipelineBintr.cpp:751:HandleErrorMessage: : Error message 'Internal data stream error.' received from 'src-queue' 0:00:10.286527026 18682 0x55a0b13aa0 INFO DSL DslPipelineBintr.cpp:526:HandleBusWatchMessage: : Message type:: GST_MESSAGE_ELEMENT ``` <issue_comment>username_0: ![pipeline](https://user-images.githubusercontent.com/20349084/88367218-d25c5180-cdbd-11ea-9da3-bae3f2d01848.png) <issue_comment>username_0: @username_1 I seem to have found the problem. When I do not use the fake sink, no error occurs. I tried the RTSP sink and window sink, and they worked well. Does something seem wrong with the fake sink? <issue_comment>username_0: The fake sink is not linked.
![pipeline](https://user-images.githubusercontent.com/20349084/88381252-abf7df80-cdd8-11ea-8b97-cb6f1cda07ca.png) <issue_comment>username_0: The fake sink is not linked. ![pipeline](https://user-images.githubusercontent.com/20349084/88381771-9931da80-cdd9-11ea-809b-cd8175c9235e.jpg) <issue_comment>username_1: @username_0 I've opened... Fake Sink element not linking to Sink Queue in Bintr #285 ... for this issue. Thanks for the debugging. <issue_comment>username_1: @username_0 the sink issue has been resolved in the tip of the [v0.07.alpha](https://github.com/canammex-tech/deepstream-services-library/tree/v0.07.alpha) branch <issue_comment>username_0: @username_1, thank you. I will try. <issue_comment>username_1: @username_0 I should have mentioned that the RTSP Source in v0.07.alpha has been updated. If you could let me know whether it works for you now, that would be great. <issue_comment>username_0: @username_1, thank you, it works now. <issue_comment>username_1: @username_0 can we close this issue now?<issue_closed>
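For anyone following along, the pipeline graphs above come from GStreamer's standard dot-file dumping. A minimal sketch of how to produce and render them (the directory path is arbitrary; assumes Graphviz is installed):

```python
import glob
import os
import subprocess

# GStreamer writes a .dot file on pipeline state changes when this variable
# is set; it must be exported before GStreamer/DSL is initialized.
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp/pipeline-graphs"
os.makedirs("/tmp/pipeline-graphs", exist_ok=True)

# ... create and play the DSL pipeline here ...

# Render every dumped graph to PNG with Graphviz for inspection.
for dot_file in glob.glob("/tmp/pipeline-graphs/*.dot"):
    subprocess.run(["dot", "-Tpng", dot_file, "-o", dot_file + ".png"], check=True)
```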
{'fraction_non_alphanumeric': 0.07327534850470631, 'fraction_numerical': 0.16680567139282734, 'mean_word_length': 3.493219129193433, 'pattern_counts': {'":': 0, '<': 27, '<?xml version=': 0, '>': 27, 'https://': 5, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28498227', 'n_tokens_mistral': 11172, 'n_tokens_neox': 8754, 'n_words': 1989}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Clarification of implicit conversion requirements username_0: Hey folks, Sorry for the bug-a-day pace, you just made too useful a product. This bug is the one I'm iffiest about filing; if nothing immediately pops out at you, let me know and I'll try to get you a reproducer. It's 100% possible this is on our side. I'm reading [this](https://github.com/NVIDIA/jitify/blob/master/jitify.hpp#L1826), which mentions implicit conversions and variadic packs likely resulting in a segfault. I'm seeing such a segfault, though my stack trace goes through [this version of launch](https://github.com/NVIDIA/jitify/blob/master/jitify.hpp#L1858), which I believe it's supposed to. I can generate one of two functions:
```
header
__global__
void jitify_example_cu112_0(int specialization,int * d_array,int debug_do_not_merge=0){
int i = threadIdx.x + blockDim.x*blockIdx.x;
if(i<1024){
for(int k = 0; k < 8; k++){
d_array[i] += specialization * specialization * specialization;
}
}
}
```
Or I can elide specialization as a constant
```
header
__global__
void jitify_example_cu112_0(int * d_array,int debug_do_not_merge=0){
int i = threadIdx.x + blockDim.x*blockIdx.x;
if(i<1024){
for(int k = 0; k < 8; k++){
d_array[i] += 512;
}
}
}
```
The launch happens a little like
```
static jitify::KernelLauncher* launcher;
template<class... Args>
void launch_assist(Args... args){
launcher->launch(args...);
}
```
From that, I successfully create a program, instantiation,...,launcher. I'm invoking the launcher with an "invoke" pattern: I'm packing my args as a tuple and then calling camp::invoke (a lot like std::apply, but more nvcc friendly). The launch happens. The first option, in which we have two real arguments being passed, crashes. The second succeeds. An equivalent operation piped through Cling seems to work. Any experience with expanding variadic packs causing you grief? Again, if nothing jumps out, I'll drill down myself a bit. Thanks! <issue_comment>username_0: Oh, and I doubt it matters, but I do have #11 merged<issue_closed> <issue_comment>username_0: Disregard this issue: getting rid of the default argument solved it. Bizarre; that might be something you care about later, but this is an avoidable issue. To clarify, moving from
```
header
__global__
void jitify_example_cu112_0(int specialization,int * d_array,int debug_do_not_merge=0){
int i = threadIdx.x + blockDim.x*blockIdx.x;
if(i<1024){
for(int k = 0; k < 8; k++){
d_array[i] += specialization * specialization * specialization;
}
}
}
```
to
```
header
__global__
void jitify_example_cu112_0(int specialization,int * d_array){
int i = threadIdx.x + blockDim.x*blockIdx.x;
if(i<1024){
for(int k = 0; k < 8; k++){
d_array[i] += specialization * specialization * specialization;
}
}
}
```
solved the bug <issue_comment>username_1: Interesting, I hadn't run into this before. Because kernels are launched via function pointers, I don't believe it's possible to support default arguments. (Unfortunately there's also no easy way to sanity-check the number of arguments.) <issue_comment>username_0: That's a little disconcerting; I made kernels with default arguments that were working if they had one non-default argument, but maybe that's just luck. Oh well, works now.
{'fraction_non_alphanumeric': 0.09435061153174142, 'fraction_numerical': 0.018637157833430402, 'mean_word_length': 3.985486211901306, 'pattern_counts': {'":': 0, '<': 16, '<?xml version=': 0, '>': 9, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9064364', 'n_tokens_mistral': 1138, 'n_tokens_neox': 1037, 'n_words': 406}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Won't compile on Ubuntu 14.04, ghc-7.8.3 username_0: cabal: Could not resolve dependencies: trying: funblog-0.1.1.0 (user goal) trying: base-4.7.0.1/installed-e4b... (dependency of funblog-0.1.1.0) next goal: HSmarty (dependency of funblog-0.1.1.0) rejecting: HSmarty-0.2.0.1, 0.2.0.0, 0.1.1.0, 0.1.0.0 (conflict: base==4.7.0.1/installed-e4b..., HSmarty => base==4.6.*) Dependency tree exhaustively searched.<issue_closed> <issue_comment>username_1: It should compile now
{'fraction_non_alphanumeric': 0.171875, 'fraction_numerical': 0.095703125, 'mean_word_length': 4.029411764705882, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 9, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '2146557', 'n_tokens_mistral': 229, 'n_tokens_neox': 216, 'n_words': 46}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix build for IE11 username_0: <issue_comment>username_0: Fixed https://github.com/OpusCapita/filemanager/pull/51 **Note:** sign-in to Google Drive on IE11 or Edge is not allowed for security reasons. Signing in from a react-client hosted on any trusted domain (for example, "xxx.azurewebsites.net") works OK. Some workaround should be found later.<issue_closed>
{'fraction_non_alphanumeric': 0.09273182957393483, 'fraction_numerical': 0.020050125313283207, 'mean_word_length': 4.633802816901408, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24219674', 'n_tokens_mistral': 132, 'n_tokens_neox': 121, 'n_words': 42}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Better handling of Clojure controller errors username_0: This falls out of my work on fw1-clj in that the current code can't tell the difference between a Clojure controller that won't `require` / `resolve` due to syntax errors vs one that is missing. <issue_comment>username_0: The error handling was more robust than I realized, but while trying to trigger various error conditions, I came across a couple of minor bugs around how Clojure controllers are created, and made some performance enhancements as well. I also enabled `before()` / `after()` methods on Clojure controllers. Now I have to check the documentation to see if I mention those _don't_ work. <issue_comment>username_0: And the answer is "no", so based on those working with pure CFML FW/1 and pure Clojure FW/1, folks should expect them to work with mixed CFML / Clojure FW/1, and therefore this is just a bug fix and doesn't need documenting beyond that.<issue_closed>
{'fraction_non_alphanumeric': 0.04766839378238342, 'fraction_numerical': 0.007253886010362694, 'mean_word_length': 4.92638036809816, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '4644773', 'n_tokens_mistral': 255, 'n_tokens_neox': 235, 'n_words': 148}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: List text is blur on animation username_0: Hello, I found that the list text is blurred when using `<!-- .element: class="fragment" -->` in Safari 14.0. Text looks fine in Chrome. I noticed that the 'will-change' CSS property has something to do with it. With css: `will-change:opacity;` <img width="131" alt="image" src="https://user-images.githubusercontent.com/858059/94172278-ecff8500-fe57-11ea-8563-f0caac480881.png"> With css: `will-change:initial;` <img width="157" alt="image" src="https://user-images.githubusercontent.com/858059/94172209-d3f6d400-fe57-11ea-8db0-72aeae7f2c67.png"> Has anyone else faced it? Is it important to use `will-change:opacity;`? <issue_comment>username_1: Hi, I have exactly the same issue: - Safari 14.0/macOS Mojave: text is blurred in list - Firefox 82.0.2/macOS Mojave: text is normal in list - Chrome 86.0.4240.111/macOS Mojave: text is normal in list Changing the style to `will-change:initial` fixes it (besides side effects)
{'fraction_non_alphanumeric': 0.1226321036889332, 'fraction_numerical': 0.09172482552342971, 'mean_word_length': 4.340425531914893, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27384322', 'n_tokens_mistral': 409, 'n_tokens_neox': 350, 'n_words': 105}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Stack Overflow reputation badge shows "invalid parameters" username_0: Experiencing an issue with... - [X] [shields.io](https://shields.io/#/) - [ ] My own instance - [ ] [badge-maker NPM package](https://www.npmjs.com/package/badge-maker) :beetle: **Description** It was working fine yesterday, but the Stack Overflow badge now just shows "invalid parameters." On the shields.io generator, all the presets seem to be invalid too. ![Screen Shot 2021-06-24 at 3 18 53 PM](https://user-images.githubusercontent.com/49819455/123339477-85b65480-d4ff-11eb-97a2-50f66d805401.png) :link: **Link to the badge** ``` <img src="https://img.shields.io/stackexchange/stackoverflow/r/14351818?color=F47F24&label=Stack%20Overflow"> ``` Live URL result | Screenshot --- | --- <img src="https://img.shields.io/stackexchange/stackoverflow/r/14351818?color=F47F24&label=Stack%20Overflow"> | ![Screen Shot 2021-06-24 at 3 18 18 PM](https://user-images.githubusercontent.com/49819455/123339433-6d463a00-d4ff-11eb-8c4f-b27b3ad3573a.png) <issue_comment>username_1: Thanks for reaching out but going to close as a duplicate of #5414, perhaps stemming from #3591<issue_closed>
{'fraction_non_alphanumeric': 0.13711001642036125, 'fraction_numerical': 0.11412151067323481, 'mean_word_length': 4.591743119266055, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 6, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8409557', 'n_tokens_mistral': 534, 'n_tokens_neox': 426, 'n_words': 97}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Suggestion: NR. 11: you do not need to write `return 0;` at the end of the `main` function, but you may username_0: # NR. 11: you do not need to write `return 0;` at the end of the `main` function, but you may ## Reason You do not need to write `return 0;` at the end of the `main` function, because the Standard adds this for you (see references 1 and 2). But if you like, you can keep it. Earlier `return`s are just fine. ## Example ``` int main() { return 0; //Unnecessary, but OK } ``` ``` int main() {} //OK ``` ``` int main() { // if (is_already_done) return 0; //OK: Earlier return // // Possibly a 'return 0;' here } ``` ## Exceptions None ## Enforcement None ## See alsos None ## Notes One can also return EXIT_SUCCESS and EXIT_FAILURE, which are macros defined in `<cstdlib>`. This non-rule originated from #182 and was declared a non-rule by Herb. ## Discussion See references ## References * [1] C++. International Standard. ISO/IEC 14882. Second edition. Paragraph 3.6.1.5 * [2] Working Draft, Standard for Programming Language C++. International Standard. ISO/IEC document number N3936. 2014-08-22. Paragraph 3.6.1.5 <issue_comment>username_1: There's only one `main` per program and this one line does no real harm. It doesn't seem worth having a rule against it.<issue_closed>
{'fraction_non_alphanumeric': 0.10660980810234541, 'fraction_numerical': 0.03127221037668799, 'mean_word_length': 3.011396011396011, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 2, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '1081117', 'n_tokens_mistral': 525, 'n_tokens_neox': 435, 'n_words': 197}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Added Ukrainian translation username_0: Hi! I'm Andrii and I've translated TryRuby to my native language. I tried to follow your guidelines for translation and now I'd like to add it to the master branch. The name of the language in the namepicker should be "Українська" as in "Русский" or "English". Cheers! <issue_comment>username_0: I don't know how long it usually takes to add a translation, but I've done everything right, and nothing's happened yet. Weird... <issue_comment>username_1: Hi Andrii, again my apologies that your translation was not picked up sooner. I think your translation looks great (at least Google Translate does a very decent translation back into English). To make the new language available it should be added to source/index.html.markdown. And you may want to add it to the `get_language` method in source/javascripts/try_ruby.js.rb, so a web browser automatically selects the Ukrainian version. <issue_comment>username_0: Thank you, sir, for checking my translation. I just did what you recommended, so I hope you'll add it this time. Cheers! <issue_comment>username_0: Thank you for everything. Now, sorry to bother you, but I just looked at the translation with a fresh eye and noticed some minor errors (last time I saw the code was in February), so I've corrected them and started a new pull request. If you have time: https://github.com/ruby/TryRuby/pull/87
{'fraction_non_alphanumeric': 0.056179775280898875, 'fraction_numerical': 0.0049157303370786515, 'mean_word_length': 4.864197530864198, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22430643', 'n_tokens_mistral': 386, 'n_tokens_neox': 366, 'n_words': 213}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Capture parents and children for each area username_0: Each area-page has knowledge of its parents (i.e. breadcrumbs) and children (e.g. the list of all sub-areas in the links side-menu). Capture this information using the ID of each area. <issue_comment>username_0: Fixed by commit https://github.com/username_0/Mountain-Project-Scraper/commit/331f0f114c8a7943997e0ec20e24ab8a92341ec4<issue_closed>
{'fraction_non_alphanumeric': 0.0779816513761468, 'fraction_numerical': 0.06880733944954129, 'mean_word_length': 6.048387096774194, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29478874', 'n_tokens_mistral': 153, 'n_tokens_neox': 140, 'n_words': 44}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Split the dev/prod OS download version options, similarly to the UI username_0: Connects to https://github.com/resin-io/hq/issues/909#issuecomment-307672118. If you are asked to pick an OS version interactively, we should ask dev/prod first, and then versions within that, rather than having a huge list that you have to interpret. Passing versions to the CLI directly should not change.
{'fraction_non_alphanumeric': 0.0611764705882353, 'fraction_numerical': 0.03058823529411765, 'mean_word_length': 4.835616438356165, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14712109', 'n_tokens_mistral': 121, 'n_tokens_neox': 109, 'n_words': 56}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Safety check that html_document is the first output specified username_0: If a user renders a workflowr analysis file to PDF and edits the YAML header to list `pdf_document()` prior to `html_document()` (this is also done automatically by the RStudio Knit button), then `wflow_build()` gives the terse error: ``` Error: unused arguments (lib_dir = "site_libs", self_contained = FALSE) ``` This is because `rmarkdown::render_site()` passes some extra arguments assuming it is passing them to `html_document()`, but `pdf_document()` does not accept the arguments `lib_dir` and `self_contained`, and thus throws an error. I can have workflowr check the order in the YAML header prior to attempt building. If the ordering is wrong, I can provide a more informative message about how to fix the problem. Something like the following: ``` yaml_header <- rmarkdown::yaml_front_matter(rmd) if (names(yaml_header$output) != "html_document") { stop("Explanation of how to edit YAML header") } ``` <issue_comment>username_0: This is no longer an issue. Using both `wflow_build()` and the RStudio Knit button, I confirmed that: 1. Listed `pdf_document()` first does not cause an error 1. Specifying custom options to `wflow_html` in the YAML header of the Rmd file overrides the options in `_site.yml` (i.e. they aren't being ignored) ``` output: pdf_document: default workflowr::wflow_html: theme: cerulean ``` I suspect this is because of the custom site generator `wflow_site()` defined in `analysis/index.Rmd`. The advice in the FAQ is still valid. Use `rmarkdown::render()` manually to create a PDF: ``` library("rmarkdown") render("analysis/file.Rmd", pdf_document()) ``` I did observe some strange behavior of RStudio. When both of the output formats list the default settings, then `Knit to PDF` works fine. ``` output: pdf_document: default workflowr::wflow_html: default ``` But if `wflow_html` is passed any options, like the first chunk in this message, then "Knit to PDF" still uses `wflow_html`.<issue_closed>
{'fraction_non_alphanumeric': 0.0893194706994329, 'fraction_numerical': 0.001890359168241966, 'mean_word_length': 4.6005291005291005, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28880944', 'n_tokens_mistral': 655, 'n_tokens_neox': 600, 'n_words': 273}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to get Emmet to generate a custom JSX attribute without quotes username_0: By default, when I expand an HTML tag with an attribute, the attribute's value gets surrounded by quotes. I'm trying to remove the quotes generated by Emmet around the props.onClick value for the custom attribute onClick.

My input (then TAB to expand): `button[onClick={props.onClick}]`

Emmet's output: `<button onClick="props.onClick"></button>`

Whereas, since I'm using React, I want to have to opt in explicitly whenever I want the expanded version to include quotes.

What I expect (props... WITHOUT quotes): `<button onClick={props.onClick}></button>`

Wrapping it in double brackets doesn't work either.

Is that possible with vscode.emmet? <issue_comment>username_1: @username_2 Are you aware of any customization that allows not having the quotes around the attribute value at all? <issue_comment>username_2: Yes, you can add an additional check for React attributes here: https://github.com/emmetio/markup-formatters/blob/master/format/html.js#L232
The React attribute object contains `{ before: '{', after: '}' }` options: https://github.com/emmetio/abbreviation/blob/master/test/attributes.js#L77 <issue_comment>username_3: @username_1, I would like to pick this up. Where exactly should I start? (Submitting a PR to another repo confused me, so I just want to know.) Thanks :) <issue_comment>username_0: @username_1 I have made a PR https://github.com/emmetio/markup-formatters/pull/2 @username_2 Can you check this, please? <issue_comment>username_1: Thanks @username_2! Just waiting for a newer version of the https://github.com/emmetio/abbreviation module, and then we should be able to pull in all the upstream fixes. <issue_comment>username_1: @username_4 All that is left to do is to update the vscode-emmet-helper module to consume the latest versions of the `abbreviation` and `markup-formatter` modules. <issue_comment>username_4: @username_1 OK, will pick it up for December. <issue_comment>username_1: @username_4 I have updated the `abbreviation` module in the vscode-emmet-helper repo. Updating `markup-formatter` to the latest v0.4.1 is not trivial. We are currently using v0.5.10 of the expand-abbreviation module, which is not compatible with the v0.4.0 version of the markup-formatter. We have 2 choices here:
- Patch the UMD version of the `expand-abbreviation` module, i.e. the `expand-full.js` file in vscode-emmet-helper, with the fix in https://github.com/emmetio/markup-formatters/pull/2/files
- Or upgrade to the latest version of the `expand-abbreviation` module as per https://github.com/Microsoft/vscode-emmet-helper/issues/28

For the short term, I would recommend the first option.<issue_closed> <issue_comment>username_5: Is this going to be back in one of the next sprints? <issue_comment>username_6: Currently, `button.square[onClick={func}]` still converts to `<button className="square" onClick="func"></button>`, not the intended `<button className="square" onClick={func}></button>`. Is there a way to get the intended behavior? Thanks <issue_comment>username_7: Is there anyone working on this? <issue_comment>username_0: Hello @username_4, this is not fixed yet. What is the current plan? <issue_comment>username_8: Any update on this? <issue_comment>username_9: Hello? <issue_comment>username_10: Bump <issue_comment>username_11: Bump <issue_comment>username_12: Bump <issue_comment>username_2: Sadly, I was unable to contact anyone on the VS Code team to help me and support/sponsor new plugin development. 
This feature is already available in the upcoming [Sublime Text plugin](https://github.com/emmetio/sublime-text-plugin) and [works in CodePen](https://blog.codepen.io/2020/06/29/an-upgrade-to-emmet/) with a recent Emmet version. <issue_comment>username_13: Is this functionality available in an extension? Since there's not a whole ton of feedback from the team. <issue_comment>username_2: @username_13 it's available in the core Emmet module. There's a draft PR with an upgrade to the recent Emmet v2, which seems to be abandoned: https://github.com/microsoft/vscode-emmet-helper/pull/33 <issue_comment>username_14: \closedWith dc5a3da
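A hedged sketch of the check username_2 describes for the markup formatter: if the parsed attribute was written with braces, emit it unquoted. The `Attribute` shape below only approximates the options object from `emmetio/abbreviation`; it is not the library's exact API.

```ts
interface Attribute {
  name: string;
  value: string;
  options?: { before?: string; after?: string };
}

// Render a JSX expression attribute (onClick={props.onClick}) without
// quotes; fall back to the regular quoted form otherwise.
function renderAttribute(attr: Attribute): string {
  const isJsxExpression =
    attr.options?.before === "{" && attr.options?.after === "}";
  return isJsxExpression
    ? `${attr.name}={${attr.value}}`
    : `${attr.name}="${attr.value}"`;
}

// renderAttribute({ name: "onClick", value: "props.onClick",
//                   options: { before: "{", after: "}" } })
//   -> 'onClick={props.onClick}'
```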
{'fraction_non_alphanumeric': 0.08737168775364049, 'fraction_numerical': 0.01694915254237288, 'mean_word_length': 5.24441132637854, 'pattern_counts': {'":': 0, '<': 32, '<?xml version=': 0, '>': 32, 'https://': 9, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7015876', 'n_tokens_mistral': 1272, 'n_tokens_neox': 1211, 'n_words': 484}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: chore(lock): Update package lock for new wslink username_0: <issue_comment>username_1: :tada: This PR is included in version 9.6.1 :tada: The release is available on: - [npm package (@latest dist-tag)](https://www.npmjs.com/package/vtk.js) - [GitHub release](https://github.com/Kitware/vtk-js/releases/tag/v9.6.1) Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:
{'fraction_non_alphanumeric': 0.16263736263736264, 'fraction_numerical': 0.017582417582417582, 'mean_word_length': 5.514285714285714, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 3, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30191382', 'n_tokens_mistral': 167, 'n_tokens_neox': 156, 'n_words': 34}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Implemented a logic app which was previously just in v2 username_0: ## Purpose
Implemented two logic apps:
1. Upload --> transcode --> video indexer
2. Publish and Get Streaming URL

These were previously in https://github.com/Azure-Samples/media-services-dotnet-functions-integration, but not in the v3 repo. There are also a lot of bug fixes to the advanced-vod-functions that I made while implementing this.

## Does this introduce a breaking change?
<!-- Mark one with an "x". -->
```
[ ] Yes
[x] No
```

## Pull Request Type
What kind of change does this Pull Request introduce?

<!-- Please check the one that applies to this PR using "x". -->
```
[ ] Bugfix
[x] Feature
[ ] Code style update (formatting, local variables)
[ ] Refactoring (no functional changes, no api changes)
[ ] Documentation content changes
[ ] Other... Please describe:
```

## How to Test
1. Deploy the Azure functions
2. Deploy the logic apps and configure the connectors
3. Upload an MP4 file into the watched storage container
4. Run the logic app trigger

## What to Check
Verify that the following are valid
* Check if a new multi-bitrate asset is created
* Check if a new indexed asset appears in your Video Indexer (VI) account
* Check if a new streaming locator is created (using the 2nd logic app)

## Other Information
High-level explanation of the logic app here: https://username_0.com/raztype/creating-an-azure-media-services-upload-workflow-using-azure-storage-and-logic-apps/ <issue_comment>username_0: Can't sign at the moment; the URL is down. <issue_comment>username_0: CLA is up, just signed.
{'fraction_non_alphanumeric': 0.08751529987760098, 'fraction_numerical': 0.009179926560587515, 'mean_word_length': 3.631728045325779, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 8, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28657061', 'n_tokens_mistral': 489, 'n_tokens_neox': 467, 'n_words': 214}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Use latest labels for identifying IngressGateway workload username_0: Resolves https://github.com/istio/istio/issues/18773 by using labels that are valid in the 1.4 beta. <issue_comment>username_1: Should we instead sync the operator to add the same labels, rather than making this change? <issue_comment>username_0: /retest <issue_comment>username_0: @username_1 The operator creates these labels for the Ingress Pod:
```
labels:
  app: istio-ingressgateway
  operator.istio.io/component: IngressGateway
  operator.istio.io/managed: Reconcile
  operator.istio.io/version: 1.4.0
  release: istio
```
The Helm deployment of Istio 1.4 alpha I installed last week used
```
labels:
  app: istio-ingressgateway
  chart: gateways
  heritage: Tiller
  istio: ingressgateway
  release: istio
```
I don't object to the new labels from the operator. I was not surprised to lose `chart` and `heritage`. I was surprised to lose `istio: ingressgateway`, and if you agree, can you create an issue for the operator to also include that label? I will put a hold on this PR to await your decision. <issue_comment>username_1: We should get rid of `chart` and `heritage` (like we have), but I think `app`, `istio`, and `release` should be consistent with the Helm charts to avoid breaking things like istioctl, tests, and users' tooling, and to be more consistent, etc. It was not an intentional design decision as far as I know. <issue_comment>username_1: If the ingress doesn't have `istio: ingressgateway`, it will break 99% of users' Gateway objects and the docs. The proper fix is in the operator.
{'fraction_non_alphanumeric': 0.06191950464396285, 'fraction_numerical': 0.013003095975232198, 'mean_word_length': 4.097791798107256, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28747032', 'n_tokens_mistral': 489, 'n_tokens_neox': 455, 'n_words': 211}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [FEA] Avoid using std::stringstream for message formatting username_0: **Is your feature request related to a problem? Please describe.** cuML's logger macros use [`std::stringstream`](https://github.com/rapidsai/cuml/blob/57a6ff7ecae6a4b6e2e9db97dc664be453310111/cpp/include/cuml/common/logger.hpp#L187) for formatting logged messages. `std::stringstream` can be quite expensive, and so this is likely a source of non-negligible runtime overhead. **Describe the solution you'd like** Use the [`fmt` library](https://github.com/fmtlib/fmt) for formatting instead. Since cuML is already using `spdlog` for logging, `fmt` is already natively supported.
{'fraction_non_alphanumeric': 0.10285714285714286, 'fraction_numerical': 0.03857142857142857, 'mean_word_length': 5.4907407407407405, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10974591', 'n_tokens_mistral': 234, 'n_tokens_neox': 211, 'n_words': 69}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Consider option to output results as astropy.Tables username_0: Astropy Tables are widely used and could be a useful output format. A similar case could be made for pandas DataFrames. Currently, Astrodbkit2 uses the native SQLAlchemy output, a list of named tuples, which can be readily transformed to other formats. While this would increase our dependencies, it may be worthwhile to consider adding this in some fashion. The ideal way would be as an extra method available when querying but that may require some deeper investigation in how the `Query` class is constructed in SQLAlchemy.<issue_closed>
{'fraction_non_alphanumeric': 0.031055900621118012, 'fraction_numerical': 0.003105590062111801, 'mean_word_length': 5.084905660377358, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13247224', 'n_tokens_mistral': 152, 'n_tokens_neox': 143, 'n_words': 95}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: authorization header is set as `bearer ....` instead of `Bearer ...` username_0: When reporting an issue, please provide the following details:

- swagger-ui version: 3.0.13
- a swagger file reproducing the issue:

I'm implementing OAuth2, and the authorization part works fine. However, the authorization header is set as `bearer ....` instead of `Bearer ...`, which is rejected by oauth2-server.

![auth](https://user-images.githubusercontent.com/11527341/26972216-155b9a3a-4d11-11e7-98c8-59ec4cc4cfa4.png)

<issue_comment>username_1: Caused by [this issue](https://github.com/swagger-api/swagger-js/issues/1040). <issue_comment>username_2: Please follow the upstream linked issue.<issue_closed>
{'fraction_non_alphanumeric': 0.11764705882352941, 'fraction_numerical': 0.06566347469220246, 'mean_word_length': 4.7637795275590555, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19999343', 'n_tokens_mistral': 261, 'n_tokens_neox': 229, 'n_words': 66}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: HTML tag creation does not work properly username_0: Issue Type: <b>Bug</b>

Here I'm facing an issue: when I create a new tag and then write a class in it, the class also gets written into the closing tag. I can't keep removing it from the closing tag again and again and then go back to working on the starting tag.

VS Code version: Code 1.41.0 (9579eda04fdb3a9bba2750f15193e5fafe16b959, 2019-12-11T18:37:42.077Z)
OS version: Windows_NT x64 10.0.18362

<details>
<summary>System Info</summary>

|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz (4 x 3000)|
|GPU Status|2d_canvas: enabled<br>flash_3d: enabled<br>flash_stage3d: enabled<br>flash_stage3d_baseline: enabled<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>oop_rasterization: disabled_off<br>protected_video_decode: enabled<br>rasterization: enabled<br>skia_renderer: disabled_off<br>surface_control: disabled_off<br>surface_synchronization: enabled_on<br>video_decode: enabled<br>viz_display_compositor: enabled_on<br>viz_hit_test_surface_layer: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|15.91GB (5.78GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (5)</summary>

Extension|Author (truncated)|Version
---|---|---
r|Iku|1.2.0
vscode-docker|ms-|0.9.0
python|ms-|2019.11.50794
LiveServer|rit|5.6.1
vscode-scss-formatter|sib|1.4.2

</details>
<!-- generated by issue reporter -->
<issue_comment>username_1: This new 1.41 feature is described in the release notes, including information on how to turn it off. Several issues have been raised about it, and some are already resolved in the 1.42 Insiders build.

Please search for 'mirror cursor' to see if any describe what you are seeing. If yes, close this as a duplicate. If not, give more information on how exactly to reproduce yours.<issue_closed> <issue_comment>username_2: @username_0 This bug should be fixed. Please open a new issue with reproducible steps.
{'fraction_non_alphanumeric': 0.10905550146056475, 'fraction_numerical': 0.055501460564751706, 'mean_word_length': 5.650485436893204, 'pattern_counts': {'":': 0, '<': 33, '<?xml version=': 0, '>': 33, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29019345', 'n_tokens_mistral': 766, 'n_tokens_neox': 691, 'n_words': 204}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Update covidhub importer to bring trees into aspen username_0: ### Description
Added support to walk the s3 bucket and copy the trees over. It also uses the datetime encoded in the filename to set the start and end times.

Depends on #146

#### Issue
[ch123933](https://app.clubhouse.io/genepi/stories/space/123933)

### Test plan
Ran the importer, manually checked the db.

<issue_comment>username_1: Trusting your self-testing!
{'fraction_non_alphanumeric': 0.08972691807542263, 'fraction_numerical': 0.02340702210663199, 'mean_word_length': 4.422535211267606, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11774044', 'n_tokens_mistral': 247, 'n_tokens_neox': 223, 'n_words': 85}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Specify Installation Directory username_0: It would be useful to be able to specify the installation/home directory. For example, we're using Amazon EC2, Terraform, and Ansible to deploy and provision our Octopus server. We would like to have the packages repository on an EBS volume that can be remounted when we tear down and recreate a new instance of the server. <issue_comment>username_1: @username_0, @rohmcd, @fodoj: To make sure I understand your request, is it just the home directory you want to change? i.e.:
```
.\Octopus.Server.exe configure --home <homepath>
```
You _don't_ mean:
* config file path (e.g. `c:\octopus\octopus.config`)
* application installation directory (e.g. `c:\program files\octopus deploy\octopus`)
* any of the paths you can customise using the `path` command (the package repository directory, the artifacts directory, or the tasklogs directory (all of these end up under the `home` directory if not set))

The PR above will allow you to customise the home directory, but none of the others. At the moment, it's hard to check what the current values of the other paths are (until I make a change in the server). <issue_comment>username_0: @username_1 Yes, it's the `--home` parameter I want to be able to set.<issue_closed> <issue_comment>username_1: This is now available in version `2.0.176`.
{'fraction_non_alphanumeric': 0.06901615271659324, 'fraction_numerical': 0.00881057268722467, 'mean_word_length': 4.702928870292887, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29243968', 'n_tokens_mistral': 384, 'n_tokens_neox': 366, 'n_words': 194}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: DDL -> avro username_0: Support converting DDL given as a string (e.g. from pg_dump) to an Avro schema, e.g.:
```
a = """
CREATE TABLE public.abc (
    a integer NOT NULL,
    b integer NOT NULL,
    created_at timestamp with time zone,
    c integer,
    hello text[],
    updated_at timestamp without time zone
);
"""  # or get from somewhere

avro_schema = get_avro_from_ddl(a)
```
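A hypothetical sketch (written in TypeScript as a language-neutral illustration, not this library's API) of the kind of mapping and naive parsing such a `get_avro_from_ddl` function would need; the type table and the comma-splitting are assumptions that would break on defaults, constraints, and nested parentheses:

```ts
// Naive PostgreSQL-type -> Avro-type table; extend as needed.
const PG_TO_AVRO: Record<string, unknown> = {
  integer: "int",
  bigint: "long",
  text: "string",
  "text[]": { type: "array", items: "string" },
  "timestamp with time zone": { type: "long", logicalType: "timestamp-millis" },
  "timestamp without time zone": { type: "long", logicalType: "timestamp-millis" },
};

function getAvroFromDdl(ddl: string): object {
  const table = /CREATE TABLE (?:\w+\.)?(\w+)/.exec(ddl)?.[1] ?? "record";
  // Grab everything between the outer parentheses and split per column.
  const body = ddl.slice(ddl.indexOf("(") + 1, ddl.lastIndexOf(")"));
  const fields = body.split(",").map((line) => {
    const trimmed = line.trim();
    const name = trimmed.split(/\s+/)[0];
    const rawType = trimmed.slice(name.length).replace(/NOT NULL/i, "").trim();
    const avroType = PG_TO_AVRO[rawType] ?? "string";
    // Columns without NOT NULL become nullable unions in Avro.
    const nullable = !/NOT NULL/i.test(trimmed);
    return { name, type: nullable ? ["null", avroType] : avroType };
  });
  return { type: "record", name: table, fields };
}
```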
{'fraction_non_alphanumeric': 0.10669975186104218, 'fraction_numerical': 0.0024813895781637717, 'mean_word_length': 2.8846153846153846, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '29162904', 'n_tokens_mistral': 145, 'n_tokens_neox': 141, 'n_words': 46}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: IAI: Added principalType to role assignment. username_0: To fix a known issue with the delay in Service Principal creation, we have added the `principalType` field to the role assignment properties. Here are a few relevant links that point to this solution:
* Stack Overflow: [Sometimes ARM template will throw PrincipalNotFound Error when Working with User-assigned Managed Identity](https://stackoverflow.com/questions/60516853/sometimes-arm-template-will-throw-principalnotfound-error-when-working-with-user)
* Microsoft Docs: [Add Azure role assignments using Azure Resource Manager templates](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-template#new-service-principal) <issue_comment>username_1: Do we also have other ARM templates where we need to apply this change?
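The role-assignment fragment in question, sketched as a TypeScript object literal standing in for the ARM JSON (placeholder values in angle brackets; the exact apiVersion used by this repo is an assumption, and per the linked docs `principalType` requires `2018-09-01-preview` or later):

```ts
const roleAssignment = {
  type: "Microsoft.Authorization/roleAssignments",
  apiVersion: "2020-04-01-preview",
  name: "<guid>",
  properties: {
    roleDefinitionId: "<role-definition-resource-id>",
    principalId: "<managed-identity-principal-id>",
    // Setting this avoids the PrincipalNotFound race while AAD finishes
    // replicating a freshly created service principal.
    principalType: "ServicePrincipal",
  },
};
```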
{'fraction_non_alphanumeric': 0.07756563245823389, 'fraction_numerical': 0.011933174224343675, 'mean_word_length': 5.821138211382114, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8802772', 'n_tokens_mistral': 223, 'n_tokens_neox': 211, 'n_words': 80}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Change refund granularity to monthly username_0: It was easy enough to distinguish between refunds for old items stopped in January and newly covered items stopped in January, so I implemented that. Old items stopped in January get a full refund. New items stopped in January get an 11/12 refund, similar to the other months. <issue_comment>username_0: Heh, heh. I'm glad you said that, because I think I made it too simplistic. It looks like the premium calculation is based on the whole year, even if the item was only covered for part of the year.
{'fraction_non_alphanumeric': 0.0378657487091222, 'fraction_numerical': 0.010327022375215147, 'mean_word_length': 4.705882352941177, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7283450', 'n_tokens_mistral': 143, 'n_tokens_neox': 137, 'n_words': 93}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Filter weavers by version number username_0: Adding the ability to filter which version of a weaver gets loaded by specifying a "WeaverVersion" attribute as part of the weaver's element in the FodyWeavers.xml. Version numbers can be specified as individual version numbers (including wild cards; "1.2.3.*") or as a range ("[1.2.3, 2.*)"). This is technically a breaking change for anyone who was previously using this attribute for their own purpose (unlikely).

I find the current behavior especially problematic in repositories where the packages directory is not checked in. When switching between branches where different versions of the weaver are expected to be used, Fody always grabs the latest version, which forces me to clear previous versions of the weaver just to build my projects correctly. This would allow specifying the appropriate version in the FodyWeavers.xml so that the code in each branch of my project would use the appropriate weaver version.

I mention this because I hate keeping version numbers in sync within a project, but also find it really useful to pin them. <issue_comment>username_1: Would it be possible for you to instead update to the new SDK style of csproj? <issue_comment>username_0: So switching is only an option in some cases. I do a lot of WPF work, which forces the old csproj. However, I am not sure how switching to the new csproj helps (perhaps I am missing something obvious?). Unless I am misreading the code, the [AddinFinder](https://github.com/Fody/Fody/blob/master/Fody/AddinFinder/AddinSearchDirectories.cs) simply parses through the various possible directories and [selects the latest](https://github.com/Fody/Fody/blob/master/Fody/AddinFinder/AddinFilesEnumerator.cs#L14) version of any given weaver. The issue I am facing is that, at times, I do not want the latest version of my weaver used. Perhaps I am working on fixing a bug on a production branch of code, so I want it to use the same weaver version that was used originally (even though there may be multiple versions in my NuGet directory). Is there some way to select a particular weaver version using the new csproj? <issue_comment>username_1: The new SDK csproj passes the NuGet versions through to the MSBuild task, hence the correct NuGet package is always used. <issue_comment>username_1: Unfortunately I can't accept this PR. It leaks in too many implementation details of NuGet versioning. This has caused significant pain and bugs in the past. I will, however, accept a PR that leverages the version of the reference assembly (which is already passed in to Fody) to infer the weaver version.
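Illustrative only: the "filter by range, then take the newest" selection the PR describes, sketched with the node-semver package. Note that NuGet's bracket syntax ("[1.2.3, 2.*)") differs from node-semver ranges, so the range string here is just an analogue:

```ts
import * as semver from "semver";

// Keep only versions that satisfy the requested range, then pick the
// newest of what remains; with no range, behave like today (latest wins).
function pickWeaverVersion(found: string[], range?: string): string | undefined {
  const candidates = range
    ? found.filter((v) => semver.satisfies(v, range))
    : found;
  return candidates.sort(semver.rcompare)[0];
}

// pickWeaverVersion(["1.2.3", "1.9.0", "2.0.1"], ">=1.2.3 <2.0.0") -> "1.9.0"
```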
{'fraction_non_alphanumeric': 0.043324143363528946, 'fraction_numerical': 0.005513981882630957, 'mean_word_length': 4.746606334841629, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 2}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17623918', 'n_tokens_mistral': 639, 'n_tokens_neox': 615, 'n_words': 380}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: HBASE-23259: Populate master address end points in cluster/rs configs… username_0: … (#807)

All the clients need to know the master RPC end points while using the master-based registry for creating cluster connections. This patch amends the test cluster utility to populate these configs in the base configuration object used to spin up the cluster.

The config key added here ("hbase.master.addrs") is used in the subsequent patches for HBASE-18095.

(cherry picked from commit 834ccb4bf6c22fc2a8aab172490fba75c7a40f1c) <issue_comment>username_0: @apurtell @ndimiduk FYI, for branch-1. <issue_comment>username_0: Ah, String.join() is not in JDK 7. Will force-push shortly. Sorry for the noise.
{'fraction_non_alphanumeric': 0.058663028649386086, 'fraction_numerical': 0.05320600272851296, 'mean_word_length': 4.967479674796748, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25250602', 'n_tokens_mistral': 246, 'n_tokens_neox': 221, 'n_words': 94}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add support for lambdify with cse username_0: @username_1, you can test this branch by compiling the symengine master branch as a shared library and then installing it to the SymEngine.jl/deps/symengine-0.3 directory. <issue_comment>username_1: Thank you! Sorry, could you explain how to do it? `Pkg.update()`? Or just copy the changes and paste them into my local file? <issue_comment>username_1: I copied the changes; now it gives
```
ERROR: LoadError: libsymengine is too old
```
<issue_comment>username_0: Which OS are you using? <issue_comment>username_1: Windows 10 <issue_comment>username_0: Download https://ci.appveyor.com/api/buildjobs/ghb5857ir7dk36rq/artifacts/symengine_MinGW-w64_x64.zip and copy the `bin/libsymengine.dll` to where `SymEngine.libsymengine` is.
{'fraction_non_alphanumeric': 0.08489388264669163, 'fraction_numerical': 0.02746566791510612, 'mean_word_length': 5.973913043478261, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13373200', 'n_tokens_mistral': 269, 'n_tokens_neox': 251, 'n_words': 88}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: QML username_0: Hi there :) Do you plan to use QML for Linux? What do you criticize about it? <issue_comment>username_1: @username_0 do you know much about QML? Do you think it's a good choice for cross-platform targets? I see there is an [OCaml library](https://github.com/Kakadu/lablqml) for it. <issue_comment>username_0: It is built for that. Nokia bought Qt with the purpose of building the ultimate cross-platform toolkit; the documentation, the implementation, and the community are all built around it. Nokia then sold it very cheaply because they went bankrupt, as we all know. It is also the most widely used open-source toolkit in that respect and has [lots](https://doc.qt.io/qtcreator/creator-using-qt-quick-designer.html) of [cool](https://github.com/mikalv/awesome-qt-qml) stuff. Yes, Kakadu has a nice library for it. <issue_comment>username_2: I was wondering about licensing. We'd like to use Qt or GTK only for Linux (macOS and Windows have their own libraries). I know that both are widely used today. Can you provide some more background on why one is better than the other? <issue_comment>username_0: Yes, macOS and Windows have their own libraries. They do not look much better or offer more performance. You can't tell the results apart; you just pay many times the development cost. You can look into this talk, which shows both the benefits of Qt over GTK and Qt's cross-platform performance. Note that this speaker is using Qt Widgets, which is not hardware accelerated and cannot run on mobile hardware the way Qt Quick (a.k.a. QML) can: https://www.youtube.com/watch?v=ON0A1dsQOV0 <issue_comment>username_2: They look, perform, and most importantly feel better than Qt. I think we might have miscommunicated the purpose of Brisk somewhere, but its goal is indeed to provide a cross-platform toolkit that communicates with the system UI libraries behind the scenes. That said, there is a place for Qt in Brisk - for Linux development. To clarify a bit: we are striving to create a unified API for UI development on various platforms, similar to React Native. That means on macOS we bind to AppKit, on iOS to UIKit, on Windows to the Windows 10 SDK, to Android Views on Android, etc. <issue_comment>username_0: You are reimplementing what is already there, and in multiple variants at that. And you are trying to do that with a very small team, where a group of professional developers has spent year after year. Qt 6 will actually be very similar to your proposal as well. https://blog.qt.io/blog/2017/02/06/native-look-feel/ <issue_comment>username_2: Point taken @username_0. It's unclear to me what the purpose of this issue is. Please clarify what the actionable item here is. If this was just a question, feel free to close it. Thanks! <issue_comment>username_0: It's unclear to me what the point of this project is. As initially asked, I was wondering if you use QML/Qt for Linux, which includes Android for me. In my personal view - and this is backed by individuals' observations of the different implementations - the integration of interface elements that Qt has achieved is indistinguishable from the native implementations.<issue_closed> <issue_comment>username_2: The point of this project is to provide a uniform, cross-platform library for developing user interfaces that uses native widgets behind the scenes. Regarding Linux (but not Android), yes, I believe Qt might be the best choice there. 
<issue_comment>username_0: And I simply think that working together with a company that also happens to be a huge open-source community makes sense.
{'fraction_non_alphanumeric': 0.05067750677506775, 'fraction_numerical': 0.007317073170731708, 'mean_word_length': 4.435935198821797, 'pattern_counts': {'":': 0, '<': 13, '<?xml version=': 0, '>': 13, 'https://': 5, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13745292', 'n_tokens_mistral': 990, 'n_tokens_neox': 934, 'n_words': 550}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: 🛑 search.dev.judilibre.io healthcheck is down username_0: In [`5b9c53b`](https://github.com/Cour-de-cassation/judilibre-uptime/commit/5b9c53bbc636c991c8ba4a45c3aeb372518efff2), search.dev.judilibre.io healthcheck (https://search.dev.judilibre.io/healthcheck) was **down**:
- HTTP code: 200
- Response time: 1340 ms
{'fraction_non_alphanumeric': 0.14, 'fraction_numerical': 0.09714285714285714, 'mean_word_length': 6.3125, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23453541', 'n_tokens_mistral': 158, 'n_tokens_neox': 139, 'n_words': 21}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: `--hmr` with `enableProdMode()` breaks HMR username_0: # 🐞 Bug report

### Command (mark with an `x`)

- [ ] new
- [ ] build
- [x] serve
- [ ] test
- [ ] e2e
- [ ] generate
- [ ] add
- [ ] update
- [ ] lint
- [ ] extract-i18n
- [ ] run
- [ ] config
- [ ] help
- [ ] version
- [ ] doc

### Is this a regression?

Not sure which previous version, if any, was unaffected.

### Description

Using the new `ng serve --hmr` functionality in Angular 11 while the app also enables production mode results in an error while swapping in the modules, and the page falls back to a normal reload with minimal indication that this is happening. I was only able to see what was going on by enabling "Preserve Log" in the Chrome console. A console warning when `enableProdMode()` is called while HMR is active might be a better indication of this.

## 🔬 Minimal Reproduction
```
ng serve --hmr
```
```js
enableProdMode()
```

## 🔥 Exception or Error
<pre><code>
[HMR] Cannot apply update. Need to do a full reload!
push.dZZH.module.exports @ polyfills.js:6327
(anonymous) @ polyfills.js:1434
invoke @ polyfills.js:13461
run @ polyfills.js:13220
(anonymous) @ polyfills.js:13954
invokeTask @ polyfills.js:13496
runTask @ polyfills.js:13264
drainMicroTaskQueue @ polyfills.js:13666
Promise.then (async)
scheduleMicroTask @ polyfills.js:13649
scheduleTask @ polyfills.js:13485
scheduleTask @ polyfills.js:13307
scheduleMicroTask @ polyfills.js:13327
scheduleResolveOrReject @ polyfills.js:13944
then @ polyfills.js:14076
[Truncated]
... language-service, platform-browser, platform-browser-dynamic
... router, service-worker

Ivy Workspace: Yes

Package                           Version
------------------------------------------------------------
@angular-devkit/architect         0.1101.0-next.2
@angular-devkit/build-angular     0.1101.0-next.2
@angular-devkit/core              11.1.0-next.2
@angular-devkit/schematics        11.1.0-next.2
@angular/cdk                      11.0.2
@angular/cli                      11.1.0-next.2
@angular/flex-layout              9.0.0-beta.31
@angular/material                 11.0.2
@angular/material-moment-adapter  11.0.2
@schematics/angular               11.1.0-next.2
@schematics/update                0.1101.0-next.2
rxjs                              6.6.3
typescript                        4.0.5
</code></pre>

<issue_comment>username_0: I realize that the two are incompatible conceptually. But we had an app that was calling `enableProdMode()` even in development without any apparent side effects, so this can definitely happen in practice.<issue_closed>
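For reference, the usual way to avoid this is the stock Angular CLI `main.ts` pattern, which guards `enableProdMode()` behind the environment flag so development serves such as `ng serve --hmr` never enable production mode:

```ts
import { enableProdMode } from "@angular/core";
import { platformBrowserDynamic } from "@angular/platform-browser-dynamic";

import { AppModule } from "./app/app.module";
import { environment } from "./environments/environment";

if (environment.production) {
  // Only enable production mode for production builds, so dev-time
  // features like HMR keep working.
  enableProdMode();
}

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .catch((err) => console.error(err));
```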
{'fraction_non_alphanumeric': 0.14424206815511165, 'fraction_numerical': 0.040834312573443006, 'mean_word_length': 2.4393939393939394, 'pattern_counts': {'":': 0, '<': 16, '<?xml version=': 0, '>': 16, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17922570', 'n_tokens_mistral': 1425, 'n_tokens_neox': 1221, 'n_words': 338}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Using 1.42+ in IntelliJ question username_0: Could you please clarify what you mean, because it doesn't? Do you run your unit tests using IntelliJ's JUnit runner, or do you instead delegate to Maven? How do you run individual unit tests?

Manually adding the `-javaagent:...` to the Run/Debug configuration for every test is not feasible. Please advise.

Also, since this project has no other active venue for asking questions (the Google Groups one doesn't seem to be active), please do not close the ticket immediately after answering, just in case there may be any follow-up questions (or feel free to propose moving the discussion to a different place). Thanks! <issue_comment>username_1: Sure:
1. Since 1.42, the "-javaagent" parameter has been mandatory, rendering the `JMockit` JUnit 4 runner redundant and useless.
2. I run tests both from IntelliJ and from Maven (Surefire). I run tests individually, as a single test class, etc. all the time from IntelliJ, with the "Ctrl+Shift+F10" key combination, the menu items, or the toolbar buttons.

In both cases, the -javaagent parameter is automatically added to the command line (assuming it is properly specified in the pom.xml file, of course). For a fully working example, see the `jmockit1/samples/petclinic` module in the GitHub repository. <issue_comment>username_0: With regard to `-javaagent`, I guess my question was why the parameter is now mandatory. Thanks! <issue_comment>username_1: The answer to that is supposed to be in #546. <issue_comment>username_0: But to me the question really is: why do you need to load the agent in the first place? I guess I'm missing some key points about how JMockit works internally. Thanks! Appreciate your patience! <issue_comment>username_1: Loading the Java agent is a requirement of the JVM, as the only way to obtain a `java.lang.instrument.Instrumentation` object (with which JMockit implements mocking & faking) is to have it passed to the `premain` or `agentmain` method (which are similar to the standard Java `main` method). Unfortunately, there is no `InstrumentationFactory` class in the Java SE APIs.<issue_closed>
{'fraction_non_alphanumeric': 0.05917986952469711, 'fraction_numerical': 0.0097856477166822, 'mean_word_length': 4.882191780821918, 'pattern_counts': {'":': 0, '<': 8, '<?xml version=': 0, '>': 8, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 1}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19615562', 'n_tokens_mistral': 574, 'n_tokens_neox': 544, 'n_words': 318}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [v2] Function does not exist for addTelemetryInitializer() username_0: The current readme suggests adding telemetry initializers using `appInsights.addTelemetryInitializer(telemetryInitializer);`. This method is not defined in the snippet when building lazy methods. As of now, we have to manually queue the method call. You should create the lazy method proxy in the snippet.<issue_closed> <issue_comment>username_1: Updated in this PR: https://github.com/microsoft/ApplicationInsights-JS/pull/996
With the update, you don't need to manually queue the method.
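A hedged sketch of the manual queueing the reporter mentions, for snippet-based loading before that PR; this assumes the v2 loader snippet's object exposes a `queue` array of deferred calls, and the `initializer` body and tag key shown are examples only:

```ts
type TelemetryItem = { tags?: Record<string, string> };
type AppInsightsSnippet = {
  queue?: Array<() => void>;
  addTelemetryInitializer?: (cb: (item: TelemetryItem) => void) => void;
};

declare const appInsights: AppInsightsSnippet;

const initializer = (item: TelemetryItem) => {
  item.tags = { ...item.tags, "ai.cloud.role": "my-frontend" }; // example only
};

if (typeof appInsights.addTelemetryInitializer === "function") {
  // Full SDK already loaded; the method exists.
  appInsights.addTelemetryInitializer(initializer);
} else {
  // Otherwise push a thunk onto the snippet's queue so it runs once
  // the SDK finishes loading.
  appInsights.queue?.push(() => appInsights.addTelemetryInitializer!(initializer));
}
```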
{'fraction_non_alphanumeric': 0.06435643564356436, 'fraction_numerical': 0.009900990099009901, 'mean_word_length': 5.7444444444444445, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25417374', 'n_tokens_mistral': 163, 'n_tokens_neox': 153, 'n_words': 68}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: after.data() is partial on updates username_0: My understanding is that `after.data()` in the document snapshot of `onWrite` represents the full state of the document after the write. However (in `3.6.0`), I'm seeing that when a client `update` doesn't include a particular field that already exists in the doc, that field is not in `after.data()`, even though it remains in the document after the write. Is this a bug, or a misunderstanding on my part? If the latter, perhaps this could be stated more clearly in the docs. <issue_comment>username_1: Hey @username_0, thanks for filing this. This is working as intended: from this page in the docs https://firebase.google.com/docs/functions/firestore-events?hl=en:
```
// Listen for changes in all documents in the 'users' collection and all subcollections
exports.useMultipleWildcards = functions.firestore
    .document('users/{userId}/{messageCollectionId}/{messageId}')
    .onWrite((change, context) => {
      // If we set `/users/marie/incoming_messages/134` to {body: "Hello"} then
      // context.params.userId == "marie";
      // context.params.messageCollectionId == "incoming_messages";
      // context.params.messageId == "134";
      // ... and ...
      // change.after.data() == {body: "Hello"}
    });
```
The feedback is appreciated though - it's always helpful to know where users are getting confused. I'll pass this along to our tech writer who maintains those docs, and see if there's a good place to clarify this further.<issue_closed>
{'fraction_non_alphanumeric': 0.10546623794212219, 'fraction_numerical': 0.0077170418006430866, 'mean_word_length': 3.773006134969325, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28097186', 'n_tokens_mistral': 431, 'n_tokens_neox': 409, 'n_words': 180}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: cargo pkg version username_0: Just wondering if it would make sense to expose the cargo pkg version in help (using something like `env!("CARGO_PKG_VERSION");`). In the interest of simplicity, I wouldn't even need/want to set the version manually. And also in the interest of simplicity, I guess this would make the most sense if the version was always-on in help, with no configuration in the macro. The workaround would be to just hardcode the version into a doc comment.

I mention this because I hate keeping version numbers in sync within a project, but also find it really useful to see in the help. <issue_comment>username_0: Ah, I see that this project is really meant for "scripts" inside of a larger project, in which case having the cargo version doesn't really make sense.<issue_closed> <issue_comment>username_1: I do think we need a better story here. In particular, it should be possible to customize the help message, to add notes, the version, etc. At the same time, I am not sure how best to do this. I'd rather let the user assemble their own help message from pieces than provide a configurable template. <issue_comment>username_1: I guess we can have some kind of structured-help feature, which provides help as an array of structs rather than a rendered string.
{'fraction_non_alphanumeric': 0.044461190655614165, 'fraction_numerical': 0.003014318010550113, 'mean_word_length': 4.487603305785124, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13058991', 'n_tokens_mistral': 334, 'n_tokens_neox': 320, 'n_words': 211}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: serialize()/deserialize() references problem username_0: It looks like Julia doesn't store references correctly. For example, in this snippet, `T.a` is a reference to the block/quote of a function, so after deserializing, this reference will be broken. Look at the example:
```julia
type T
  e::Expr
  a::Array{Any,1}
end

ex = :(function f() return true end)
t = T(ex, ex.args[2].args) # a refers to the function's block

io = open("file.dat", "w")
serialize(io, t)
close(io)

io = open("file.dat")
t1 = deserialize(io)
t1.e.args[2].args === t1.a # false, but should be true, no?
close(io)
```
This issue is related to [this one](https://github.com/JuliaLang/JLD.jl/issues/63). <issue_comment>username_1: I don't think it is possible to do this. In fact, deserialize shouldn't create any aliasing with the original input.<issue_closed> <issue_comment>username_2: No, this is about shared references within an object. We can handle this in some cases, but not all. For efficiency, we don't track the `args` array of an Expr separately. @username_0 how did this come up? I don't think it should be necessary to persist an Expr and its args array and then mutate them. That seems very strange to me. <issue_comment>username_1: Ahh, I didn't realize the RHS of the `===` is using `t1` and not `t`. <issue_comment>username_0: It comes from my backup mechanism. I have to save the application state to a file from time to time, because of [these](https://github.com/JuliaLang/julia/issues/15017) crashes. So, after a crash it can recover itself from the last backup and continue working. I use special tree-like structures (types) for storing all the references inside my AST. This is something like AST metadata (where all the variables are, where all the functions are, and so on). These metadata types contain many references to the original AST expression. <issue_comment>username_2: Shared references to `Expr`s should work. Is it possible to avoid separately referencing the Expr `args` arrays? <issue_comment>username_0: As I understand it, I may serialize `ex.args[x]` instead of `ex.args[x].args`. Is that correct? If yes, then it's possible to rewrite my code in this way, thanks.
{'fraction_non_alphanumeric': 0.10471567267683772, 'fraction_numerical': 0.010748959778085992, 'mean_word_length': 4.2550091074681236, 'pattern_counts': {'":': 0, '<': 10, '<?xml version=': 0, '>': 10, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '12789301', 'n_tokens_mistral': 918, 'n_tokens_neox': 878, 'n_words': 391}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: The "Add-CIisDefaultDocument" function doesn't exist even though it should. Since carbon 2.10.0 username_0: Our deployment-server (octopus) installs the carbon package automatically from powershell gallery. Now with 2.10.0 (since yesterday in powershell gallery) we are getting (reproducible) this bug ``` NotSpecified: Something unexpected happened. The "Add-CIisDefaultDocument" function doesn't exist even though it should. Here are all functions we know about: (with a long list, can post if relevant) ``` <issue_comment>username_1: 2.10.0 fails to import on PowerShell 4. A fix is coming in the next hour. Sorry! <issue_comment>username_1: OK. 2.10.1 is out. Let me know if you still have problems. <issue_comment>username_1: Discovered that 2.10.1 didn't fix the issue, but figured out the root cause: Carbon fails to import if IIS isn't installed. 2.10.2 will fix this, out sometime today. <issue_comment>username_1: 2.10.2 is out. Let me know if you still have an issue. <issue_comment>username_0: ok we'll try it in the next 2 days. Thanks for your very fast response!<issue_closed> <issue_comment>username_1: Haven't heard back. Assuming fixed. <issue_comment>username_0: Yes sorry, used the previous version longer than expected, than vacations, ... Works for now, thanks for the fix!
{'fraction_non_alphanumeric': 0.07617625093353249, 'fraction_numerical': 0.028379387602688575, 'mean_word_length': 4.87719298245614, 'pattern_counts': {'":': 0, '<': 10, '<?xml version=': 0, '>': 10, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11531602', 'n_tokens_mistral': 406, 'n_tokens_neox': 375, 'n_words': 180}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Can anyone help merge tls_passthrough.py into stickycookies? username_0: I want to use tls_passthrough.py in the stickycookies file, but I don't know how to do it. Can anyone help me?

Source code:

1. File: tls_passthrough.py
```
from __future__ import (absolute_import, print_function, division)

import collections
import random

from enum import Enum

from libmproxy.exceptions import TlsProtocolException
from libmproxy.protocol import TlsLayer, RawTCPLayer


class InterceptionResult(Enum):
    success = True
    failure = False
    skipped = None


class _TlsStrategy(object):
    """
    Abstract base class for interception strategies.
    """
    def __init__(self):
        # A server_address -> interception results mapping
        self.history = collections.defaultdict(lambda: collections.deque(maxlen=200))

    def should_intercept(self, server_address):
        """
        Returns:
            True, if we should attempt to intercept the connection.
            False, if we want to employ pass-through instead.
        """
        raise NotImplementedError()

    def record_success(self, server_address):
        self.history[server_address].append(InterceptionResult.success)

    def record_failure(self, server_address):
        self.history[server_address].append(InterceptionResult.failure)

    def record_skipped(self, server_address):
        self.history[server_address].append(InterceptionResult.skipped)


class ConservativeStrategy(_TlsStrategy):
    """
    Conservative Interception Strategy - only intercept if there haven't
    been any failed attempts in the history.
    """
    def should_intercept(self, server_address):
        if InterceptionResult.failure in self.history[server_address]:
            return False
        return True


class ProbabilisticStrategy(_TlsStrategy):
    """
    Fixed probability that we intercept a given connection.
    """
    def __init__(self, p):
        self.p = p
        super(ProbabilisticStrategy, self).__init__()

    def should_intercept(self, server_address):
        return random.uniform(0, 1) < self.p


class TlsFeedback(TlsLayer):
    """
    Monkey-patch _establish_tls_with_client to get feedback if TLS could be
    established successfully on the client connection (which may fail due
    to cert pinning).
    """
    def _establish_tls_with_client(self):
        server_address = self.server_conn.address
        tls_strategy = self.script_context.tls_strategy
        try:
            super(TlsFeedback, self)._establish_tls_with_client()
        except TlsProtocolException as e:
            tls_strategy.record_failure(server_address)
            raise e
        else:
            tls_strategy.record_success(server_address)


def start(context, argv):
    if len(argv) == 2:
[Truncated]
        self.shutdown()

    def handle_request(self, flow):
        hid = (flow.request.host, flow.request.port)
        if "cookie" in flow.request.headers:
            self.stickyhosts[hid] = flow.request.headers.get_all("cookie")
        elif hid in self.stickyhosts:
            flow.request.headers.set_all("cookie", self.stickyhosts[hid])
        flow.reply()

    def handle_response(self, flow):
        hid = (flow.request.host, flow.request.port)
        if "set-cookie" in flow.response.headers:
            self.stickyhosts[hid] = flow.response.headers.get_all("set-cookie")
        flow.reply()


config = proxy.ProxyConfig(port=8080)
server = ProxyServer(config)
m = StickyMaster(server)
m.run()
```<issue_closed> <issue_comment>username_1: Subclassing the master, as done in stickycookies, has explicitly been unsupported for a while now. Sorry, but we can't help you with that. Feel free to take a look at the current stickycookie addon and ask further questions in the forums. Thanks!
{'fraction_non_alphanumeric': 0.08118458003574164, 'fraction_numerical': 0.0033188664794485574, 'mean_word_length': 2.985757884028484, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '14205607', 'n_tokens_mistral': 1144, 'n_tokens_neox': 1077, 'n_words': 304}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: (Function component) cannot be used as a JSX component username_0:
```
error TS2786: 'TagIcon' cannot be used as a JSX component.
  Its return type 'ReactElement<any, any> | null' is not a valid JSX element.

147             <TagIcon/>
                 ~~~~~~~
```

I'm using `@primer/octicons-react` and I'm aliasing `react` to `jsx-dom` in the Webpack config, but TypeScript doesn't seem to automatically accept this:

```tsx
import React from 'jsx-dom';
import {TagIcon} from '@primer/octicons-react';

document.body.append(<TagIcon/>)
```

Unless I include this type in my global.d.ts:

```ts
declare module 'react' {
	const FC = (): JSX.Element => JSX.Element;
	const React = {FC};
	export default React;
}
```
<issue_comment>username_0: Interestingly, another React package from GitHub (https://github.com/primer/components) doesn't work at all:

```tsx
import React from 'jsx-dom';
import {SelectMenu} from '@primer/components';

document.body.append(<SelectMenu/>)
```

By default:

```ts
error TS2786: 'SelectMenu' cannot be used as a JSX component.
  Its return type 'ReactElement<any, any> | null' is not a valid JSX element.

72 document.body.append(<SelectMenu/>)
                         ~~~~~~~~~~
```

With that FC type:

```ts
source/features/faster-pr-diff-options.tsx:72:24 - error TS2786: 'SelectMenu' cannot be used as a JSX component.
  Its return type 'ReactElement<any, any> | null' is not a valid JSX element.
    Type 'ReactElement<any, any>' is not assignable to type 'ReactElement | null'.
      Type 'ReactElement<any, any>' is missing the following properties from type 'SVGElement': className, ownerSVGElement, viewportElement, addEventListener, and 217 more.

72 document.body.append(<SelectMenu/>)
                         ~~~~~~~~~~
```
<issue_comment>username_1: Really good question. Should this be supported? As for me, I don't think so. If we provide our react-module polyfill, we kind of say that `jsx-dom` fully supports **all** of its functionality, so any project that uses `React` can use `jsx-dom` as well. But that's not true.

----------

For example, you provided a link to the [@primer/components](https://github.com/primer/components) library. And it simply cannot be paired with `jsx-dom`, since it's **hugely** dependent on `React`! Some details from the source code of the `SelectMenu` component:

```jsx
import React, {useCallback, useEffect, useRef, useState} from 'react'
// ...
const SelectMenu = React.forwardRef<HTMLElement, SelectMenuInternalProps>(
  // ...
  return (
    <MenuContext.Provider value={menuProviderValues}>
      <StyledSelectMenu ref={ref} {...rest} open={open} onToggle={toggle}>
        {children}
      </StyledSelectMenu>
    </MenuContext.Provider>
  )
);
```

And its transpiled version:

```js
var _react = _interopRequireWildcard(require("react"));
// ...
const SelectMenu = _react.default.forwardRef(({
  // ...
  return /*#__PURE__*/_react.default.createElement(_SelectMenuContext.MenuContext.Provider, {
    value: menuProviderValues
  }, /*#__PURE__*/_react.default.createElement(StyledSelectMenu, _extends({
    ref: ref
  }, rest, {
    open: open,
    onToggle: toggle
  }), children));
);
```

Does `jsx-dom` have `forwardRef`, `useCallback`, `useEffect`? Nope => We can't just go and create an alias like `react <-> jsx-dom` in this case => `_react.default.createElement` won't create an HTMLElement => TypeScript doesn't lie to you, and its output really cannot be used as `jsx-dom`'s JSX.Element.

-------

Considering all of the above, I'm confident that `jsx-dom` **should not** include a react module definition, giving its users the false assurance that it can completely replace React. 
If you want to pair some React library with `jsx-dom`, it's **your** responsibility to make sure everything is compatible and everything works just fine, not `jsx-dom`'s.

<issue_comment>username_0: Gotcha, but this is just a JSX type issue. Actual compatibility with the module is secondary and is discovered at runtime.

<issue_comment>username_0: Also, the package goes a long way to implement noop React methods specifically to attempt React compatibility. I think it's totally reasonable to also expect that some basic code (the octicon returns plain SVG JSX) would also be accepted by the types. If it works but the types don't, then the types are incomplete. If, like in the second case, it doesn't work, the type problem is secondary, but I hadn't gotten to actually running the code yet.

<issue_comment>username_1: Well, I used the word "compatibility" before, but now I think that the right one would be "similarity", yeah.

------

And I also want to cite scientific knowledge (aka "how did others do the same thing") as an argument: let's take a look at Preact, which is "React but better" by design. Just like `jsx-dom` it's similar to React, and a lot of *simple* packages like `@primer/octicons-react` won't notice any difference. But in general it's incompatible with React, so there's **no** `module "react"` definition, and, as a result, no false assurance for users that everything "just works". `Preact`'s target is **similarity** with React, not **compatibility**. Just like `jsx-dom`'s, I believe. If we want compatibility, it's possible (I think) to make another library like `jsx-dom-compat` (similar to Preact's approach), but that's another long story :)

------

I also can't help but notice how quickly people change. You moved from:

- Why in the world do we need `props`?

to

- I want to take this library, combine it with another library that targets React, and make everything work really quickly :D

<issue_comment>username_0: Wow, I don't even know what you're doing in this repo. This whole ordeal is hacky, and TypeScript isn't really set up in a way that knows anything other than React for JSX. You know that. The readme knows that. Don't come telling me "why are you even using TypeScript." You just went from not knowing how JSX works to writing tomes about a library you discovered last week. The PR you sent was wrong many times, and that's why I had to reject it and fix it and prove to you every time that your assumptions were incorrect. As far as I know I'm done discussing this with you.<issue_closed>
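For readers who want to try the aliasing trick anyway, here is a minimal sketch of what the two pieces look like together. It is an illustration only: the shim below is an assumption about what satisfies the `TS2786` check in one particular setup, not an API published by `jsx-dom` or `@primer/octicons-react`.

```ts
// webpack.config.js (excerpt): route all `react` imports to jsx-dom.
// module.exports = { resolve: { alias: { react: 'jsx-dom' } } };

// react-shim.d.ts: tell TypeScript that "React" components in this project
// return DOM elements. The return type (Element) is an assumption; match it
// to whatever your jsx-dom JSX factory actually produces.
declare module 'react' {
  export type FC<P = {}> = (props: P) => Element;
  const React: {FC: FC};
  export default React;
}
```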
{'fraction_non_alphanumeric': 0.10384313725490196, 'fraction_numerical': 0.0051764705882352945, 'mean_word_length': 3.72646404744255, 'pattern_counts': {'":': 0, '<': 25, '<?xml version=': 0, '>': 29, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3922894', 'n_tokens_mistral': 1940, 'n_tokens_neox': 1804, 'n_words': 813}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Require a channel when sending messages in the topology username_0: See https://arxiv.org/pdf/1802.02652.pdf.

<issue_comment>username_1: Since a channel will be required, how do you plan to expose that configuration? Is the plan to point people to Partisan or provide an alternative as part of this project? Strictly speaking, this is really more of an optional thing rather than required, since the default Erlang distribution channel is sufficient for small clusters - I think it's only in large clusters where congestion becomes a problem.

<issue_comment>username_0: We want to change the Firenest.Topology API so that a channel is expected as an argument when sending messages. This will allow us to use Partisan efficiently, or allow someone to implement better multiplexing than disterl, which uses a single connection.

<issue_comment>username_1: That makes sense, but what is the channel abstraction? As far as I'm aware, there is nothing for distributed Erlang; it's just baked into the kernel modules (primarily `:rpc`) - is Firenest planning on exposing a behavior or protocol for other libraries to standardize on, or is there one already available that I'm not aware of? I didn't see anything like that in Partisan, but I skimmed through, so I may have missed it.

<issue_comment>username_0: The name "channel" is confusing in our context, so we will need something better. The idea, though, is to tag messages, and we provide an ordering guarantee only for messages with the same tag. This allows us to open up multiple connections between nodes. More info is in section 3.3.2 of the paper.

<issue_comment>username_1: I think it's good terminology, but yeah, in the context of Phoenix it will certainly be confusing. In any case, I think I get it now: in `Firenest.Topology.Erlang`, the channel tag would be ignored, but for a topology based on Partisan, the tag would be passed through to the Partisan API, is that correct? I'm assuming the channel tag would be optional when sending messages using the Firenest API, but required by `Firenest.Topology` implementations, since that appears to be how Partisan works as well (non-tagged messages just use a default channel).

<issue_comment>username_0: Yes, precisely. :+1:

<issue_comment>username_2: Some other names instead of "Channel":

Accurate, but probably bad because of their use in different (but related) contexts:
- Socket
- Pipe

Maybe better; all of these attempt to capture the _'messages are ordered when having (/being on) the same X'_ idea:
- Plane
- Level
- Line

<issue_comment>username_0: I think I would call it a partition and probably force them to be integers. If we don't want to impose integers, then I would say the argument is always hashed.

-- José Valim, www.plataformatec.com.br, Founder and Director of R&D

<issue_comment>username_0: @username_2 right, so we can just allow them to be any term, which we will hash and send over one of the available underlying connections.<issue_closed>
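The "hash any term onto one of the underlying connections" idea sketched at the end of this thread is language-agnostic. Firenest itself is Elixir; the snippet below is a TypeScript illustration only, with `Connection`, `send`, and the hash function as hypothetical stand-ins rather than Firenest APIs:

```ts
// Messages with the same tag always use the same connection, so ordering
// is preserved per tag while load spreads across connections.
interface Connection {
  send(message: unknown): void;
}

function hashTerm(term: string): number {
  // Tiny stable non-cryptographic hash; any deterministic hash works here.
  let h = 0;
  for (let i = 0; i < term.length; i++) {
    h = (h * 31 + term.charCodeAt(i)) >>> 0;
  }
  return h;
}

function sendTagged(pool: Connection[], tag: string, message: unknown): void {
  pool[hashTerm(tag) % pool.length].send(message);
}
```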
{'fraction_non_alphanumeric': 0.05121470781352593, 'fraction_numerical': 0.007879185817465528, 'mean_word_length': 4.727443609022556, 'pattern_counts': {'":': 1, '<': 13, '<?xml version=': 0, '>': 13, 'https://': 1, 'lorem ipsum': 0, 'www.': 2, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '26586038', 'n_tokens_mistral': 806, 'n_tokens_neox': 764, 'n_words': 453}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Fix odometry sign error username_0:
* Fix odometry sign error
* Add warning in code regarding Create 1 odometry issue
* Add odom_example.cpp

<issue_comment>username_1: I think the inversion of the odometry on the Create 1 is an error. When using create_autonomy and viewing the robot in rviz, the y axis is inverted. Removing the negative from the Create 1 branch of the if statement fixes this behavior. Possibly the sign error was only a bug on the Create 2, and it was incorrectly changed for the Create 1 as well?
{'fraction_non_alphanumeric': 0.03231597845601436, 'fraction_numerical': 0.012567324955116697, 'mean_word_length': 4.6938775510204085, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '22691506', 'n_tokens_mistral': 153, 'n_tokens_neox': 139, 'n_words': 89}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add custom registration page + registration with facebook, linkedin, google & wechat username_0:
https://developers.facebook.com/docs/facebook-login/web/
https://developers.google.com/identity/sign-in/web/sign-in
https://docs.microsoft.com/en-us/linkedin/consumer/integrations/self-serve/sign-in-with-linkedin
https://open.wechat.com/cgi-bin/newreadtemplate?t=overseas_open/docs/web/login/login<issue_closed>
{'fraction_non_alphanumeric': 0.1412556053811659, 'fraction_numerical': 0.002242152466367713, 'mean_word_length': 5.7727272727272725, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 4, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30713585', 'n_tokens_mistral': 149, 'n_tokens_neox': 144, 'n_words': 16}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [Request] Custom validations on 'Stream' types username_0: ### Description

I've come across a few scenarios where I need to validate specific properties and property combinations on `Stream` instances. I noticed the library does not expose any custom validations for such types, so I had to work around it.

I'd like to see simple validations such as:
- `.BeWritable` (`stream.CanWrite`)
- `.BeReadable` (`stream.CanRead`)
- `.BeReadOnly` (`stream.CanRead && !stream.CanWrite`)
- `.BeWriteOnly` (`stream.CanWrite && !stream.CanRead`)
- `.BeSeekable` (`stream.CanSeek`)

As well as potentially more advanced validations such as:
- `.HaveContentsEquivalentTo(byte[])` (matches a given `byte[]` for readable streams)

### Expected behavior:
```csharp
stream.Should().BeReadOnly();
```

### Actual behavior:
```csharp
using (new AssertionScope())
{
    stream.CanRead.Should().BeTrue();
    stream.CanWrite.Should().BeFalse();
}
```

### Versions
* Which version of Fluent Assertions are you using? 5.10.3
* Which .NET runtime and version are you targeting? E.g. .NET framework 4.6.1 or .NET Core 2.1. Net Core 3.1 / Net 5

### Additional Information
Related to #961

<issue_comment>username_1: Sounds like a good start to me :+1:

For completeness it should also have:
* `NotBeSeekable`
* `NotBeReadable`
* `NotBeWritable`
* Writability and readability are independent properties. E.g. an initially write-only stream can be in a state where it has been closed for further writing. It is now not read-only, but at the same time not write-only.

It should maybe have
* `HaveLength`/`NotHaveLength`
* This _may_ throw `NotSupportedException` for non-seekable streams.

I'm not sure about `HaveContentsEquivalentTo`, as it would require us to make assumptions on _how_ to consume the particular stream. That might create a strong coupling on implementation details.

The `StreamAssertions` should be generic in `TStream`, such that it:
* retains the compile-time type of the stream and
* allows for type-specific derivations of `StreamAssertions`

<issue_comment>username_0: Can you elaborate on what assumptions you are foreseeing? As long as the comparison is with a `byte[]`, I don't think there would be much to it, since you can read the bytes from a stream regardless of stream type, right?

<issue_comment>username_1: My initial thoughts are:
* An infinite `Stream` that can never be fully consumed.
  * This could be handled by reading at most `expected.Length` bytes.
* Failing to read from a write-only `Stream`.
* A `Stream` where using `CopyTo` would give sync-over-async execution.

<issue_comment>username_2: I'm looking into this<issue_closed>
{'fraction_non_alphanumeric': 0.09219600725952813, 'fraction_numerical': 0.007622504537205082, 'mean_word_length': 4.151401869158878, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '18256058', 'n_tokens_mistral': 831, 'n_tokens_neox': 787, 'n_words': 338}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: support for Enum type (sqla only) username_0: This is an attempt to support Sqlalchemy Enum types. I don't use Mongo so I'm not sure if there is an equivalent type there. Should resolve #408, I guess.

I also used destructuring-bind for a loop over conversion_table to increase readability.

(tested with Python 3.4)

<issue_comment>username_1: Hi Jarosław, This is a great feature, give me some time to run some tests.

<issue_comment>username_1: Hi Jarosław, Nice work, thanks!
{'fraction_non_alphanumeric': 0.06130268199233716, 'fraction_numerical': 0.01532567049808429, 'mean_word_length': 4.5638297872340425, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '5229052', 'n_tokens_mistral': 162, 'n_tokens_neox': 150, 'n_words': 75}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Extensions can potentially block autocompletion from other extensions username_0: The TabNine VSCode plugin suppresses suggestions from other extensions. It's described in
https://github.com/zxqfl/tabnine-vscode/issues/6
https://github.com/zxqfl/tabnine-vscode/issues/20

The root cause is described in the comment https://github.com/zxqfl/tabnine-vscode/issues/6#issuecomment-437712095

My feeling is that it should be fixed inside VSCode, rather than in the plugin, to prevent this behavior for other extensions as well.

- VSCode Version: all versions
- OS Version: all versions

Steps to Reproduce:

1. Install [TabNine extension](https://marketplace.visualstudio.com/items?itemName=TabNine.tabnine-vscode)
2. Suggestions from other extensions stop working

Does this issue occur when all extensions are disabled?: No

<issue_comment>username_1: Do you also know how? When an extension errors, then we don't prevent completions, but when extensions never return a result, we do (after what timeout should we stop?). Then for trigger characters it gets more tricky. There is a pretty clear note in the docs: https://github.com/microsoft/vscode/blob/bd79c20ac85d44a42d28defb6aa280d5e9ab066b/src/vs/vscode.d.ts#L8167-L8170. Only triggering providers when their character is being hit still seems correct to me, and I think disabling certain characters is fair. The question remains whether a trigger that's also a word character should re-trigger suggest.

<issue_comment>username_0: @username_1 thanks for getting back. I'm not quite sure what happens in this particular case. I still would expect completions from other extensions to show up, even if trigger characters are defined as the whole alphabet. It also would be pretty hard to draw the line on what characters should be whitelisted. Are you aware of what the root cause of the problem is?

<issue_comment>username_1: Yes, I tried to explain that here: https://github.com/microsoft/vscode/issues/80295#issuecomment-528232991.

<issue_comment>username_0: @username_1 I don't think I have enough context about the problem. Do you have somebody in mind who can be pulled into a discussion?<issue_closed>
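To make the trigger-character mechanism concrete, here is roughly how an extension ends up being consulted on every keystroke. The registration call is the real VS Code API; the provider body is a placeholder, and registering the whole alphabet is shown only to mirror what TabNine reportedly did:

```ts
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const provider: vscode.CompletionItemProvider = {
    provideCompletionItems() {
      // Placeholder: a real extension would compute suggestions here.
      return [new vscode.CompletionItem('example')];
    },
  };
  // Every letter as a trigger character means the provider runs on
  // essentially every keystroke, which is what interacts badly with
  // the results of other providers.
  const triggers = 'abcdefghijklmnopqrstuvwxyz'.split('');
  context.subscriptions.push(
    vscode.languages.registerCompletionItemProvider('*', provider, ...triggers)
  );
}
```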
{'fraction_non_alphanumeric': 0.05863636363636364, 'fraction_numerical': 0.029545454545454545, 'mean_word_length': 4.932614555256064, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 6, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11764274', 'n_tokens_mistral': 652, 'n_tokens_neox': 594, 'n_words': 270}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add units list to CatalogJson username_0: Discussed w/ @sruthipendyala - we thought it is better to include one list of all the units w/ prettyName instead of including prettyName on every single tier block - this also allows the API to be backwards compatible. Reverted the commit that removed unit and added name and prettyName to TierBlockJson, added back prettyName functionality to DefaultUnit, added units to CatalogJson. 🌴 <issue_comment>username_1: πŸ‘
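A sketch of the JSON shapes being described. The field names beyond `name`, `prettyName`, `unit`, and `units` are placeholders, not the project's actual definitions:

```ts
// Hypothetical shapes illustrating the change: prettyName lives on a single
// top-level units list instead of on every tier block.
interface UnitJson {
  name: string;
  prettyName: string;
}

interface TierBlockJson {
  unit: string; // references a UnitJson by name
  // ...other tier fields elided
}

interface CatalogJson {
  units: UnitJson[];
  // ...other catalog fields elided
}
```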
{'fraction_non_alphanumeric': 0.04040404040404041, 'fraction_numerical': 0.00404040404040404, 'mean_word_length': 5.2784810126582276, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8949853', 'n_tokens_mistral': 128, 'n_tokens_neox': 125, 'n_words': 69}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Specify double quotes in Windows deployment command docs username_0: ## Description

The [deployment command instructions](https://docs.api3.org/pre-alpha/guides/provider/deploying-airnode.html#deployment) specify that windows users should replace `$(pwd)` with `%cd%`. This results in the invalid command:

```bash
docker run -it --rm ^
  --env-file .env ^
  --env COMMAND=deploy-first-time ^
  -v %cd%:/airnode/out ^
  api3/airnode-deployer:pre-alpha
```

`%cd%` should be wrapped in double quotes as `"%cd%"`.

## Steps to reproduce

Fails:
```bash
docker run -it --rm ^
  --env-file .env ^
  --env COMMAND=deploy-first-time ^
  -v %cd%:/airnode/out ^
  api3/airnode-deployer:pre-alpha
```

Good:
```bash
docker run -it --rm ^
  --env-file .env ^
  --env COMMAND=deploy-first-time ^
  -v "%cd%":/airnode/out ^
  api3/airnode-deployer:pre-alpha
```

<issue_comment>username_1: Weird, I tried the one without quotations myself and know about at least two people who have used it successfully. Does this happen with cmd?

<issue_comment>username_0: Had a provider attempt to deploy on windows and got an `Invalid Reference Format` error. It looked like a standard CMD but I can't be sure. Adding the quotes based on the top answer [here](https://stackoverflow.com/questions/46940191/docker-exe-invalid-reference-format) seemed to fix it.

(Also, I don't think I have permission to transfer the issue. We can close this and open a new one if you like)<issue_closed>

<issue_comment>username_1: I see, it looks like this is only a problem if `%cd%` returns a path with whitespace

I created a replacement issue, closing this. Thanks a lot @username_0
{'fraction_non_alphanumeric': 0.11453488372093024, 'fraction_numerical': 0.009883720930232558, 'mean_word_length': 3.676630434782609, 'pattern_counts': {'":': 1, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '12243768', 'n_tokens_mistral': 592, 'n_tokens_neox': 525, 'n_words': 197}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Lexicon viewer username_0: Also see #63. It’d be nice to have an in-game list of all the replacements each lexicon does, in case someone doesn’t like certain replacements.

<issue_comment>username_0: Even better would be a toggle for each one (default would be enabled) 👀
{'fraction_non_alphanumeric': 0.06168831168831169, 'fraction_numerical': 0.012987012987012988, 'mean_word_length': 5.18, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '27022686', 'n_tokens_mistral': 93, 'n_tokens_neox': 85, 'n_words': 44}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Added "ons-dialog" and "ons-alert-dialog" custom elements. username_0: Moved ons.notification, ons.createAlertDialog(), and ons.createDialog() to core.

@username_1 Merge these changes if there are no problems.

<issue_comment>username_1: @username_0 Thanks! I will review it as soon as I can!

<issue_comment>username_1: @username_0 Looks great! Merged.
{'fraction_non_alphanumeric': 0.09183673469387756, 'fraction_numerical': 0.015306122448979591, 'mean_word_length': 5.775862068965517, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30225916', 'n_tokens_mistral': 126, 'n_tokens_neox': 119, 'n_words': 42}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Add support for producing semantics in other IRs username_0: It would make remill more widely useful if it were possible to get instruction semantics without having to rely on lifting from LLVM IR. While LLVM IR is nice because of its wide support, it's not an ideal IR for representing instruction semantics and has poor performance compared to IRs like [cretonne's](https://github.com/stoklund/cretonne). It seems like it wouldn't be too much work to port the existing C++ code to use [Reactor](https://swiftshader.googlesource.com/SwiftShader/+/HEAD/docs/Reactor.md) or something like it. This would allow either direct compilation to LLVM IR, runtime generation of LLVM IR, or runtime generation of some other IR like radare's ESIL or binaryninja's LLIL.

<issue_comment>username_1: Can you elaborate on your specific use case, or the problems you are facing? What are the ideal features that you are looking for in an IR?

<issue_comment>username_0: There are a couple of specific use cases I'm thinking of:

1. Doing Generic Value-Set analysis for jump table detection as described here: https://binary.ninja/2017/11/06/architecture-agnostic-function-detection-in-binaries.html
2. Replacing the code here: https://github.com/yegord/snowman/blob/master/src/nc/arch/x86/X86InstructionAnalyzer.cpp
3. Replacing the code here: https://github.com/das-labor/panopticon/blob/master/amd64/src/semantic.rs

There's nothing specific about these IRs that I'm looking at (I didn't design them, so I don't know the rationale for the authors' choices not to use LLVM IR). My request is more about being able to interoperate with a given IR without having to make a trip through LLVM IR to get there.

<issue_comment>username_1: I'm closing this out for now. It's a pretty intense ask, and going through LLVM is what would enable the best set of optimizations in the first place, I think.<issue_closed>
{'fraction_non_alphanumeric': 0.06535269709543569, 'fraction_numerical': 0.010892116182572614, 'mean_word_length': 4.741071428571429, 'pattern_counts': {'":': 0, '<': 6, '<?xml version=': 0, '>': 6, 'https://': 5, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25581446', 'n_tokens_mistral': 552, 'n_tokens_neox': 511, 'n_words': 260}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Keyboard events get captured for the whole document when using SDL username_0: I added a textarea to a document with an Emscripten SDL canvas on it today, and wasn't able to input text in it. I `grep`ed through the generated .js and found [this](https://github.com/kripken/emscripten/blob/46d00a9b4f825955cbc5588285db92d237525074/src/library_sdl.js#L1224). It's nice, and also sufficient for what I do at the moment, that I can just set `Module.doNotCaptureKeyboard` to true to make the textarea work, though I don't think this is the best solution. Wouldn't it be possible to (optionally?) add event listeners for `Module.canvas` instead of the whole document?

<issue_comment>username_1: What is the current standard here?
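A sketch of the workaround mentioned above. `Module.doNotCaptureKeyboard` is the flag checked by the linked `library_sdl.js` code; everything else here (the page setup, the assumed shape of `Module`) is illustrative:

```ts
// Set the flag before the Emscripten runtime script loads, so SDL never
// attaches its key handlers to the whole document.
declare global {
  interface Window { Module: { [key: string]: unknown } }
}

window.Module = window.Module ?? {};
window.Module.doNotCaptureKeyboard = true;

// With capture disabled, a plain form field on the same page keeps
// receiving keystrokes alongside the SDL canvas.
const textarea = document.createElement('textarea');
document.body.appendChild(textarea);

export {};
```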
{'fraction_non_alphanumeric': 0.06455862977602109, 'fraction_numerical': 0.04743083003952569, 'mean_word_length': 5.333333333333333, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23083723', 'n_tokens_mistral': 244, 'n_tokens_neox': 220, 'n_words': 102}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Windows 10 - APP username_0: Hello, I am a huge fan of Pixiv, and I need a Pixiv GUI downloader. I only found the console version 100% working, but I prefer a GUI. This app does not work on Windows 10; when I tried it in a virtual machine running Windows 7, it worked well without errors, and everything was good (quality, speed). Thanks
{'fraction_non_alphanumeric': 0.05945945945945946, 'fraction_numerical': 0.024324324324324326, 'mean_word_length': 4.621212121212121, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '5145352', 'n_tokens_mistral': 107, 'n_tokens_neox': 93, 'n_words': 62}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: 257/preview permission username_0: This PR aims to fix the permission issue on the preview tab.

Current progress:
- If permission = 'none', the preview tab will not render.
- If permission = 'write' or 'owner', the preview tab will render along with edit functionality.

WIP:
- If permission = 'read', we should hide the edit functionality. This is a little tricky, as the preview section is embedded and it doesn't know about the permission.

<issue_comment>username_1: yeps, please # it.

<issue_comment>username_0: done
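A sketch of the gating logic the checklist describes. The `Permission` values come from the PR text; the function and field names are assumptions about this codebase:

```ts
type Permission = 'none' | 'read' | 'write' | 'owner';

// Hypothetical helper: decide what the preview tab should do for a
// given permission level.
function previewTabConfig(permission: Permission) {
  return {
    showPreview: permission !== 'none', // hide the tab entirely
    allowEdit: permission === 'write' || permission === 'owner', // gate editing
  };
}
```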
{'fraction_non_alphanumeric': 0.0743362831858407, 'fraction_numerical': 0.010619469026548672, 'mean_word_length': 4.549019607843137, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '9974895', 'n_tokens_mistral': 154, 'n_tokens_neox': 148, 'n_words': 75}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Support zooming username_0: What would be the best approach to adding zooming functionality? Thanks!

<issue_comment>username_1: You can put `ACEDrawingView` inside a `UIScrollView`, and enable zooming. The trick is to set `scrollView.panGestureRecognizer.minimumNumberOfTouches = 2;`, in order to let one-finger gestures be used for drawing only. Very straightforward; the big downside is that all strokes will suffer a slight delay (the scroll view needs time to decide if you’re actually using two fingers).

<issue_comment>username_2: That's an interesting idea.... a lot of people are requesting zoom capabilities. Do you mind editing the demo app with this approach? I think a lot of people will appreciate it. Thanks

<issue_comment>username_1: Sure, it’s really easy to do. The only thing is, like I mentioned, it does add a delay to every path you draw. It’s a small delay, but noticeable. So I think it’s best to keep the default demo app as-is; maybe add a button to switch to a zoomable mode? Or document it in the readme?

<issue_comment>username_3: Hello username_1, did you finish this feature?

<issue_comment>username_4: Can we get the sample? Thanks.

<issue_comment>username_5: I put ACEDrawingView inside a UIScrollView and set scrollView.panGestureRecognizer.minimumNumberOfTouches = 2. It is able to zoom, but it still draws a little before it zooms.

<issue_comment>username_6: @username_5 Any solutions found for removing that drawing before it zooms?
{'fraction_non_alphanumeric': 0.05416116248348745, 'fraction_numerical': 0.007926023778071334, 'mean_word_length': 5.06, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25496420', 'n_tokens_mistral': 428, 'n_tokens_neox': 409, 'n_words': 215}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Loader not working on IE11 username_0: Hi, I am currently extending an existing web application with a cornerstone viewer to show nifti files. Everything seems to work perfectly fine with Chrome, but IE11 is giving an error that 'Symbol' is undefined. The same issue is also present in the example included with this library (examples/orientation-check.html). It works when opened with Chrome, but not IE11.

I know 'Symbol' is something new in ES6, though I'm not sure exactly what it does. I do use the es6-shim.js polyfill so I can use certain ES6 functionality like Promises, but it appears that this does not support Symbol. I'm guessing it would work fine if my project used something like TypeScript + core-js (which I have used before), but as I'm just extending an existing project, I really don't want to change the whole build process.

Would there be an easy way to get this to work with IE11? Looking at the source, Symbol seems to be used as a different way of declaring methods? Could I just... change the source from, for example, "[determineMetaData] () {...}" to "function determineMetaData () {...}" and remove the symbols? Or are these things more complicated than they look at first sight? Or is there maybe some other way of polyfilling them that I don't know of yet?

<issue_comment>username_1: Tagging @fegemo because he originally added the Symbol usage. I think it was meant as a way to make the methods private. You can probably polyfill Symbol. I don't think there's any issue with removing it.

<issue_comment>username_2: Dropping this into your application should be a quick way to see if it's polyfillable: `<script src="https://cdn.polyfill.io/v2/polyfill.min.js"></script>`

Docs: https://polyfill.io/v2/docs/

![image](https://user-images.githubusercontent.com/5797588/50777234-09652380-1269-11e9-863e-6d00ac5098b8.png)

<issue_comment>username_0: Yes, thank you very very much. I never heard of polyfill.io before. We don't want our project to rely on a third-party service, but I managed to use their API to create and download a polyfill file containing the minimum ES6 functionality, including Symbols, that we need to get IE11 to work properly.
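To make the Symbol question concrete, here is the pattern in miniature. This is a sketch, not the library's actual code; it shows why Symbol-keyed methods are mostly a privacy convention that can be replaced by ordinary methods when a polyfill is unwanted:

```ts
// Symbol-keyed method: the method name is not a plain string property,
// which is how a library can hide "private" methods.
const determineMetaData: unique symbol = Symbol('determineMetaData');

class LoaderWithSymbol {
  [determineMetaData]() { /* ... */ }
}

// Equivalent without Symbol (what the question above proposes):
class LoaderPlain {
  determineMetaData() { /* ... */ }
}

// Runtime feature check before relying on Symbol in browsers like IE11:
if (typeof Symbol === 'undefined') {
  console.warn('Symbol missing: load a polyfill (e.g. core-js) first.');
}
```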
{'fraction_non_alphanumeric': 0.06428571428571428, 'fraction_numerical': 0.027232142857142858, 'mean_word_length': 4.374100719424461, 'pattern_counts': {'":': 0, '<': 7, '<?xml version=': 0, '>': 7, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 1, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28677916', 'n_tokens_mistral': 639, 'n_tokens_neox': 587, 'n_words': 330}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: `create --local` should NOT trigger build username_0: When a local component is created, the build shouldn't be triggered. It should just create a component and wait for the user to run `ocdev push` to build the component. If the component is created from remote git, then it is ok to start building the component right away.

<issue_comment>username_1: When the component is created:
- A build configuration should be created as a binary build
- There should be no triggers happening (if they happen automatically, don't create with triggers and add them afterwards in a second action)
- If the trigger was removed, add it back
- Give some hint about the component being created (and the need to run push to start a build)

When push happens:
- Start a binary build from the local directory (or file)
- Stream the logs from the build

There's some discussion on a related topic here: https://github.com/openshift/origin/issues/15429<issue_closed>
{'fraction_non_alphanumeric': 0.04979253112033195, 'fraction_numerical': 0.007261410788381743, 'mean_word_length': 4.273224043715847, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '30107016', 'n_tokens_mistral': 243, 'n_tokens_neox': 235, 'n_words': 142}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Track z3/isSubnormal issue username_0: Due to this bug: https://github.com/Z3Prover/z3/issues/10, the following does not work:

```
sat isSubnormalFP
```

as it produces `+0.0`. Test this again when the z3 bug is fixed.

<issue_comment>username_0: Also track https://github.com/Z3Prover/z3/issues/13

<issue_comment>username_0: Upgraded Z3 this morning and this is no longer an issue:

```
Prelude Data.SBV> sat isSubnormalFP
Satisfiable. Model:
  s0 = 1.0e-323 :: SDouble
```

which is indeed a denormal number!<issue_closed>
{'fraction_non_alphanumeric': 0.10782608695652174, 'fraction_numerical': 0.03826086956521739, 'mean_word_length': 4.1891891891891895, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 6, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3069898', 'n_tokens_mistral': 219, 'n_tokens_neox': 201, 'n_words': 61}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Boss Mechanik 2 - Books username_0: Challenges the player to predict the movement of the enemy, so he feels smart when he predicts it and can dodge. Uses the jump function and trains the player's jump timing.

- [ ] books charge in the direction of the last player position
- [ ] books spawn at the background books position and fly in the air for a while
- [ ] books have health and can die
- [ ] books die after they have charged once<issue_closed>
{'fraction_non_alphanumeric': 0.04670912951167728, 'fraction_numerical': 0.004246284501061571, 'mean_word_length': 3.968421052631579, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '10818678', 'n_tokens_mistral': 125, 'n_tokens_neox': 120, 'n_words': 73}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: [issue] metrics endpoint shows matched: n/a despite console output showing matches username_0: **Describe the bug** Whilst running a nuclei scan with the `-metrics` flag, we noticed that it says `"matched": "n/a",` despite the console showing `Matched: 14`. **Nuclei version** 2.3.1 built from `39d57ea` **Screenshot of the error or bug** ![2021-03-21_10-40](https://user-images.githubusercontent.com/466878/111911078-dda27280-8a5b-11eb-996f-8b2691f60c3f.png) ![2021-03-21_10-41](https://user-images.githubusercontent.com/466878/111911101-f6128d00-8a5b-11eb-8070-f61d71b19a64.png)<issue_closed>
{'fraction_non_alphanumeric': 0.13862928348909656, 'fraction_numerical': 0.16510903426791276, 'mean_word_length': 5.242718446601942, 'pattern_counts': {'":': 1, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 2, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '618904', 'n_tokens_mistral': 304, 'n_tokens_neox': 244, 'n_words': 52}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: How to detect audio & video devices with Client Application username_0: Hi, I am new to JavaScript. I want to get all audio & video sources in a client application (meaning without any web server, e.g. apache). I used the getSources API to get all devices, but it doesn't work without a web server (apache server). Could you please guide me on how to get all local devices in a client application (without any web server)? Thanks in advance.

<issue_comment>username_1: Hi @username_0 Use [Detect.js](https://github.com/username_2/WebRTC-Experiment/tree/master/DetectRTC)

<issue_comment>username_2: @username_0 You either need to enable this command line flag: [`allow-file-access-from-files`](http://kurtextrem.github.io/ChromiumFlags/#allow-file-access-from-files)

It'll allow invocations of APIs, e.g. getUserMedia, from the `file://` protocol as well. By default, such APIs work only with HTTP and HTTPS.

`MediaStreamTrack.getSources` is a javascript API. These APIs work in the context of the browsers. You are simply asked to load the HTML page over HTTP or HTTPS or, otherwise, enable the command-line flag that is mentioned above.

<issue_comment>username_0: Hi Muaz, Thank you so much for the reply. I started Chrome with the flag `"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files` and tested the example below, which works with a web server. But it is not working with a locally-loaded HTML file (without a web server, file:///whatever/index.html).

Example:
```html
<!DOCTYPE html>
<html>
<body>
<button type="button" onclick="myFunction()">Try it</button>
<script type="text/javascript">
function myFunction(){
    navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.getUserMedia;
    if (typeof MediaStreamTrack === 'undefined'){
        alert('This browser does not support MediaStreamTrack.\n\nTry Chrome Canary.');
    } else {
        MediaStreamTrack.getSources(gotSources);
    }
}

function gotSources(sourceInfos) {
    for (var i = 0; i != sourceInfos.length; ++i) {
        var sourceInfo = sourceInfos[i];
        var option = document.createElement("option");
        option.value = sourceInfo.id;
        if (sourceInfo.kind === 'audio') {
            console.log('Got Audio Device ');
        } else if (sourceInfo.kind === 'video') {
            console.log('Got Video Device ');
        } else {
            console.log('Some other kind of source: ', sourceInfo);
        }
    }
}
</script>
</body>
</html>
```<issue_closed>
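Worth noting for later readers: `MediaStreamTrack.getSources` was a prefixed Chrome API that has since been removed; the standardized replacement is `navigator.mediaDevices.enumerateDevices()`. A minimal sketch (device labels may be empty until the user has granted a getUserMedia permission):

```ts
// List all media input/output devices using the standard API.
async function listDevices(): Promise<void> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  for (const device of devices) {
    if (device.kind === 'audioinput') {
      console.log('Got audio device:', device.label);
    } else if (device.kind === 'videoinput') {
      console.log('Got video device:', device.label);
    } else {
      console.log('Other device kind:', device.kind, device.label);
    }
  }
}

listDevices();
```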
{'fraction_non_alphanumeric': 0.10236768802228412, 'fraction_numerical': 0.00383008356545961, 'mean_word_length': 4.005226480836237, 'pattern_counts': {'":': 0, '<': 16, '<?xml version=': 0, '>': 17, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17320118', 'n_tokens_mistral': 849, 'n_tokens_neox': 818, 'n_words': 317}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Feature: additional options username_0: It would be great if one could provide additional options to the lessc command, e.g. also via a config option.

<issue_comment>username_1: I've been testing this, but it seems changes to these settings require a restart. Not sure why that is.

<issue_comment>username_1: Closed in `v0.5.0`<issue_closed>
{'fraction_non_alphanumeric': 0.07446808510638298, 'fraction_numerical': 0.015957446808510637, 'mean_word_length': 5.732142857142857, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '18130130', 'n_tokens_mistral': 110, 'n_tokens_neox': 108, 'n_words': 49}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Throwing toLowerCase of null username_0: I have a service that hits elasticsearch and brings back arrays of data. Searching works fine, but sometimes, just by searching text, it throws the following error:

```
CompleterCmp.html:17 ERROR TypeError: Cannot read property 'toLowerCase' of null.
```

Line 17 is:

```html
<div class="completer-item-text" [ngClass]="{'completer-item-text-image': item.image || item.image === '' }">
```

Note that search still works and still brings back an array of objects, but the dropdown stops working. Any ideas on what might be causing this?

<issue_comment>username_0: Fixed by checking for nulls and ignoring them or setting them to a random value.<issue_closed>
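The fix described above ("checking for nulls and ignoring them") amounts to a guard before items reach the completer. A sketch with an assumed item shape:

```ts
// Hypothetical item shape; only the searchable text matters for the guard.
interface Item {
  title: string | null;
  image?: string | null;
}

// Drop items whose searchable text is null, so downstream
// toLowerCase() calls never see a null value.
function sanitize(items: Item[]): Array<Item & { title: string }> {
  return items.filter(
    (item): item is Item & { title: string } => typeof item.title === 'string'
  );
}
```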
{'fraction_non_alphanumeric': 0.07323943661971831, 'fraction_numerical': 0.008450704225352112, 'mean_word_length': 5.076923076923077, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23448458', 'n_tokens_mistral': 193, 'n_tokens_neox': 183, 'n_words': 90}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: make_cache_key doesn't consider Accept header username_0: If your Flask app returns different content based on the requested format in the Accept header, then the wrong content format may get cached: for example, XML for JSON, or JSON for XML.

Here's a fix I made in flask_cache/__init__.py, which may not be the right fix, but it communicates the issue:

```python
def make_cache_key(*args, **kwargs):
    if callable(key_prefix):
        return key_prefix()
    enc = request.headers['Accept'] + '~'
    if '%s' in key_prefix:
        return enc + (key_prefix % request.path)
    return enc + key_prefix
```

I realize I can write my own function with `@cache.cached(key_prefix=my_own_method)`, but in practice this is something you want to do site-wide.
{'fraction_non_alphanumeric': 0.06191588785046729, 'fraction_numerical': 0.0011682242990654205, 'mean_word_length': 2.483739837398374, 'pattern_counts': {'":': 0, '<': 2, '<?xml version=': 0, '>': 2, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 2}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '13582940', 'n_tokens_mistral': 228, 'n_tokens_neox': 212, 'n_words': 96}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Chronograf makes references to InfluxDB OSS -- when it should be InfluxDB username_0: ###### URL for relevant page? example: https://docs.influxdata.com/chronograf/v1.6/guides/monitoring-influxenterprise-clusters/#step-4-explore-the-monitoring-data-in-chronograf The functionality presented isn't necessarily specific to InfluxDB OSS -- it equally applies to InfluxDB Enterprise as well. We should comb through the Chronograf docs to call out where there are differences specifically between the Enterprise and OSS Data Sources. But, otherwise, leave the reference "generic." <issue_comment>username_1: In the linked example, I think the doc actually intentionally specified InfluxDB OSS here because it's a tutorial on how to set up InfluxDB OSS to monitor an InfluxDB Enterprise cluster...let me know if I'm misunderstanding though. I ran a search in the rest of the Chronograf (v1.6) docs and didn't see any other examples that needed changing. <issue_comment>username_2: Closing since not an issue.<issue_closed>
{'fraction_non_alphanumeric': 0.0640904806786051, 'fraction_numerical': 0.007540056550424128, 'mean_word_length': 5.0685714285714285, 'pattern_counts': {'":': 0, '<': 5, '<?xml version=': 0, '>': 5, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '28457875', 'n_tokens_mistral': 292, 'n_tokens_neox': 274, 'n_words': 129}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: serviceDetails cannot find package.json when hosted in iisnode on Windows username_0: When hosted on Windows in IIS under iisnode, `process.mainModule` isn't the application. Instead, iisnode's `interceptor.js` is started as main, which then loads the application. This means `process.mainModule.paths` cannot be used to decide where to look for `package.json`.

##### How to fix

If we're on iisnode, we can use the fact that the current working directory is set to the application's, and look for `package.json` there.

<issue_comment>username_0: I have a fix for this; however, the integration tests are failing (timeouts).
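A sketch of the cwd fallback described above. The `IISNODE_VERSION` environment check is an assumed detection heuristic, and the whole snippet is illustrative rather than the project's actual fix:

```ts
import * as fs from 'fs';
import * as path from 'path';

// Under iisnode, process.mainModule points at interceptor.js, so prefer
// the current working directory, which iisnode sets to the app root.
function findPackageJson(): string | undefined {
  const candidates: string[] = [];
  if (process.env.IISNODE_VERSION) {
    candidates.push(path.join(process.cwd(), 'package.json'));
  } else if (process.mainModule) {
    for (const p of process.mainModule.paths) {
      candidates.push(path.join(path.dirname(p), 'package.json'));
    }
  }
  return candidates.find((c) => fs.existsSync(c));
}
```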
{'fraction_non_alphanumeric': 0.06891271056661562, 'fraction_numerical': 0.0030627871362940277, 'mean_word_length': 4.945454545454545, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '18674019', 'n_tokens_mistral': 180, 'n_tokens_neox': 170, 'n_words': 91}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Request for a sanity check username_0: I cannot affect the schema, as this is a read-only database. We have unix timestamps that I'd like to perform a WHERE ... BETWEEN query on. I get valid results from mysql itself, but through this module I get absolutely nothing, not even an error. Would anyone be willing to attempt a `select {something} from {somewhere} where timestamp between {some lower value} and {some upper limit};`? I believe it is failing in this package. I.e.:

`SELECT * FROM actions WHERE stamp BETWEEN '1000000000' AND '1567630955' ORDER BY stamp DESC LIMIT 100;`

Thank you for your attention.

<issue_comment>username_1: Hi @username_0, sorry you're having trouble. The MySQL `BETWEEN` can have various results depending on the types of both the arguments you have provided it (strings in your example) and the type of the columns. Without the DDL of your table, I cannot say what the expected outcome is for your case when providing strings to `BETWEEN`.

How `BETWEEN` works is documented in the MySQL manual here: https://dev.mysql.com/doc/refman/8.0/en/comparison-operators.html#operator_between and you can find the rules for how it works with mismatched types here: https://dev.mysql.com/doc/refman/8.0/en/type-conversion.html

You say they are INTs, so if you are providing strings, typically the MySQL server returns no results in that situation. Have you tried using plain numbers in your `BETWEEN`? For example:

```sql
SELECT * FROM actions WHERE stamp BETWEEN 1000000000 AND 1567630955 ORDER BY stamp DESC LIMIT 100;
```

<issue_comment>username_0: Yes I have, and I have also read the documentation. I can also run the command outside in the cli, as well as all the other queries I've needed.

<issue_comment>username_1: That is very strange indeed. Sorry you're having trouble. Would you be able to add `debug: true` to your code (see https://github.com/mysqljs/mysql#debugging-and-reporting-problems) and then run that query and paste all the output it produces here? Hopefully that should be enough to see what is going on.

<issue_comment>username_0: Unfortunately I could only take my work home in spirit tonight. I take it you have no issue with a BETWEEN on a unix timestamp?

<issue_comment>username_1: There is no such column type as a UNIX timestamp in MySQL that I am aware of. What is the exact type of the MySQL column you are trying to use `BETWEEN` on?

<issue_comment>username_0: they are usually stored as ints

<issue_comment>username_1: So if it is an `INT`, no, I have never had an issue using this module to do `BETWEEN` on INT columns.

<issue_comment>username_0: and have you tested 2.17.1?

<issue_comment>username_1: Yes, I use this module in several of my own production apps, always the latest version.

<issue_comment>username_0: I'll need to see if there is any debug output for more information. Have you found any common incompatibilities with node and datatypes, for instance? Traps to step into? I'm most perplexed that I get no feedback on what might be happening: no errors, everything perfect around the specific query, and the query working outside the application. Going mad, thank you.
<issue_comment>username_1: I just tried the following right now with 2.17.1 of this module on MySQL server 5.7.27, with no issue:

```js
var conn = mysql.createConnection({ /* my details */ });
conn.query('CREATE TEMPORARY TABLE `issue_2265` (id INT PRIMARY KEY)');
conn.query('INSERT INTO issue_2265 (id) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9)');
conn.query('SELECT * FROM issue_2265 WHERE id BETWEEN 4 AND 8', function (err, rows) {
  if (err) throw err;
  console.dir(rows);
});
```

and the output is:

```
[ RowDataPacket { id: 4 },
  RowDataPacket { id: 5 },
  RowDataPacket { id: 6 },
  RowDataPacket { id: 7 },
  RowDataPacket { id: 8 } ]
```

I'm not aware of any incompatibilities with data types. This module just implements the MySQL protocol. Under the hood, that protocol simply sends your query as a text string to the MySQL server, and the server is what parses and runs the query. Then it sends back the row results the same as it would for any query, no matter what the `WHERE` looked like.

<issue_comment>username_0: That's the expectation and what I love about this module. I really appreciate you taking a moment to look at this with me this late. I'm up against a milestone and would really hate having to replace it. :( Tomorrow's gonna be a long day.

<issue_comment>username_1: It's no problem. I wish I knew what was wrong so I could just have a solution for you :(

<issue_comment>username_1: But yea, just thinking of things I would check if I were in your shoes, when you are back in the grind:

If you add `debug: true` to your connection, you can watch the output. Check that (a) you see the query you expect to see with the BETWEEN, and then see what results come back for it.

Then, based on what that displayed, I could see three different paths:

(a) You saw no BETWEEN or the wrong query -- then run through the logic that should be sending / constructing said query, as there may be an issue there

(b) You saw the query and there were results shown -- then run through the logic that shuttles the results from this module to where you're expecting to see them, as there may be an issue there

(c) You saw the query but it displayed there were no results -- validate that it is going against the expected database / user / server.

Those may not be the only things, but hopefully that list will help out when you get back to it. There is certainly a possibility there is a bug in this module too -- I'm definitely not going to rule it out, as you never really know.

<issue_comment>username_0: Expecting 7 entries from the query, confirmed working when sent as a plain query to the cli. Redacted anything that could potentially leak sensitive information. Strong suspicion that the large ints were creating this issue. Confirmed that is not the issue. Query confirmed as composed correctly with three other parties. Server confirmed as correct. Is it possible it's being flagged as a compound statement with an AND? [sendlog.txt](https://github.com/mysqljs/mysql/files/3581374/sendlog.txt)

<issue_comment>username_1: Hi @username_0, thanks for that log. I assume that last query with the `BETWEEN` is the one in question? I'm not sure what was in the `...` spot that was cut out, but I guess just 7 more `FieldPacket`s, right? If so, the server itself is saying there are no results, so if that query sent is the correct query, then there is nothing this module can do in this case to correct the issue, as it is not sending back any results.
This is because in the response sequence, you get the `ResultSetHeaderPacket`, which signals how many fields are in the results (8), then that many `FieldPacket`s next, then the `EofPacket` to signal the end of fields, then a `RowDataPacket` for each of the result rows, then the `EofPacket` to signal the end of results. Zero result rows would be indicated by the server with the double `EofPacket`s at the end, as you are seeing. You can find this flow diagramed out in the MySQL server protocol documentation from Oracle here: https://dev.mysql.com/doc/internals/en/com-query-response.html#packet-COM_QUERY_Response

I'm not sure there is anything further I can say here besides that the logs are showing that, at the protocol level, your MySQL server is returning zero result rows. Probably the only way to really move forward here is perhaps you can provide all relevant instructions such that I can get a replication of this set up on my end to just trial and error to determine what is happening. It could even be some kind of arcane bug in the MySQL server itself, I'm not sure. How the other clients with the exact same query are getting results is beyond me. Perhaps a network-level packet capture of the transaction between one of those other clients and the MySQL server would provide some answers.

<issue_comment>username_0: I goofed and failed to sanitize two words of the log, but on deeper inspection of the logs I see no hints toward an issue. It appears as a valid query without any results. But because I don't see any way to even ask for help at this point, I'm going to unclutter your issue board with this item.<issue_closed>

<issue_comment>username_0: Going to request a migrational field with a proper date format and see if I can work with that. The size of the query without a range is simply too large to handle sort and filter client side.

<issue_comment>username_2: I'm sorry to ask this here, but the question is really hard for me. The question is: **mysql explode string into one column**

<issue_comment>username_3: @username_2 try `concat()`. Look under string functions in the MySQL documentation.
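Two of the suggestions above combined into one sketch: enable protocol-level debugging and pass the BETWEEN bounds as numeric placeholders, so no string/INT coercion is involved. Connection details are placeholders; `debug: true` and `?` placeholders are real options of this module:

```ts
import * as mysql from 'mysql';

// debug: true prints every protocol packet (ResultSetHeaderPacket,
// FieldPacket, RowDataPacket, EofPacket), like the log discussed above.
const conn = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'secret',
  database: 'mydb',
  debug: true,
});

// Numeric placeholder values are sent as numbers, sidestepping the
// string-vs-INT comparison rules mentioned earlier in the thread.
conn.query(
  'SELECT * FROM actions WHERE stamp BETWEEN ? AND ? ORDER BY stamp DESC LIMIT 100',
  [1000000000, 1567630955],
  function (err, rows) {
    if (err) throw err;
    console.dir(rows);
  }
);
```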
{'fraction_non_alphanumeric': 0.05596246280613413, 'fraction_numerical': 0.014190890363927672, 'mean_word_length': 4.328658536585366, 'pattern_counts': {'":': 0, '<': 23, '<?xml version=': 0, '>': 23, 'https://': 5, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 2, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '8384372', 'n_tokens_mistral': 2391, 'n_tokens_neox': 2247, 'n_words': 1382}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Syntaxnet Installation Failure OSX
username_0: ### System information
- **What is the top-level directory of the model you are using**: syntaxnet
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**: no
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: OSX El Capitan version 10.11.6
- **TensorFlow installed from (source or binary)**: source
- **TensorFlow version (use command below)**: ('v1.3.0-rc2-20-g0787eee', '1.3.0')
- **Bazel version (if compiling from source)**: 0.6.1-homebrew
- **CUDA/cuDNN version**: n/a
- **GPU model and memory**: n/a
- **Exact command to reproduce**:
```
git clone --recursive https://github.com/tensorflow/models.git
cd models/research/syntaxnet/tensorflow
./configure
cd ..
bazel test --linkopt=-headerpad_max_install_names \
  dragnn/... syntaxnet/... util/utf8/...
```

### Describe the problem
//dragnn/python:graph_builder_test fails

### Source code / logs
```
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
-----------------------------------------------------------------------------
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
/private/var/tmp/_bazel_username_0/84f257278b69232e5174bb7ca894d15f/execroot/__main__/bazel-out/darwin_x86_64-opt/bin/dragnn/python/graph_builder_test.runfiles/org_tensorflow/tensorflow/python/ops/gradients_impl.py:95: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
.WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
.WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
.WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
.WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
2017-10-10 22:10:42.149270: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA
2017-10-10 22:10:42.314622: I dragnn/core/ops/dragnn_op_kernels.cc:79] Creating new ComputeSessionPool in container handle: simple-parser
2017-10-10 22:10:42.314749: I syntaxnet/embedding_feature_extractor.cc:35] Features: input(-1).word input(-2).word input(-3).word input.word input(1).word input(2).word input(3).word
2017-10-10 22:10:42.314767: I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: words
2017-10-10 22:10:42.314773: I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 64
2017-10-10 22:10:42.316716: I syntaxnet/term_frequency_map.cc:101] Loaded 3 terms from dragnn/core/testdata/syntaxnet_tagger.word-map.
2017-10-10 22:10:42.317860: I syntaxnet/term_frequency_map.cc:101] Loaded 46 terms from dragnn/core/testdata/syntaxnet_tagger.label-map.
2017-10-10 22:10:42.317888: I syntaxnet/embedding_feature_extractor.cc:35] Features: stack.focus stack(1).focus
2017-10-10 22:10:42.317896: I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: rnn
2017-10-10 22:10:42.317901: I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 32
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been deprecated.
WARNING:tensorflow:Consider switching to the more efficient V2 format:
WARNING:tensorflow: `tf.train.Saver(write_version=tf.train.SaverDef.V2)`
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
2017-10-10 22:10:43.336889: I dragnn/core/compute_session_pool.cc:55] Destroying pool: total number of sessions created = 1
.WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
2017-10-10 22:10:44.957363: I dragnn/core/ops/dragnn_op_kernels.cc:79] Creating new ComputeSessionPool in container handle: simple-tagger
2017-10-10 22:10:44.957433: I syntaxnet/embedding_feature_extractor.cc:35] Features: input(-1).word input(-2).word input(-3).word input.word input(1).word input(2).word input(3).word
2017-10-10 22:10:44.957450: I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: words
2017-10-10 22:10:44.957455: I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 64
2017-10-10 22:10:44.958223: I syntaxnet/term_frequency_map.cc:101] Loaded 3 terms from dragnn/core/testdata/syntaxnet_tagger.word-map.
2017-10-10 22:10:44.960177: I syntaxnet/term_frequency_map.cc:101] Loaded 49 terms from dragnn/core/testdata/syntaxnet_tagger.tag-map.
2017-10-10 22:10:44.960208: I syntaxnet/embedding_feature_extractor.cc:35] Features: stack.focus
2017-10-10 22:10:44.960212: I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: rnn
2017-10-10 22:10:44.960215: I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 32
WARNING:tensorflow:*******************************************************
WARNING:tensorflow:TensorFlow's V1 checkpoint format has been deprecated.
WARNING:tensorflow:Consider switching to the more efficient V2 format:
WARNING:tensorflow: `tf.train.Saver(write_version=tf.train.SaverDef.V2)`
WARNING:tensorflow:now on by default.
WARNING:tensorflow:*******************************************************
2017-10-10 22:10:45.665411: I dragnn/core/compute_session_pool.cc:55] Destroying pool: total number of sessions created = 1
.WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
WARNING:tensorflow:tf.op_scope(values, name, default_name) is deprecated, use tf.name_scope(name, default_name, values)
[Truncated]
  File "/private/var/tmp/_bazel_username_0/84f257278b69232e5174bb7ca894d15f/execroot/__main__/bazel-out/darwin_x86_64-opt/bin/dragnn/python/graph_builder_test.runfiles/__main__/dragnn/python/graph_builder.py", line 353, in <lambda>
    lambda: comp.build_greedy_training(*args)))
  File "/private/var/tmp/_bazel_username_0/84f257278b69232e5174bb7ca894d15f/execroot/__main__/bazel-out/darwin_x86_64-opt/bin/dragnn/python/graph_builder_test.runfiles/__main__/dragnn/python/component.py", line 393, in build_greedy_training
    with tf.control_dependencies([tf.assert_equal(self.training_beam_size, 1)]):
  File "/private/var/tmp/_bazel_username_0/84f257278b69232e5174bb7ca894d15f/execroot/__main__/bazel-out/darwin_x86_64-opt/bin/dragnn/python/graph_builder_test.runfiles/org_tensorflow/tensorflow/python/ops/check_ops.py", line 318, in assert_equal
    _assert_static(condition_static, data)
  File "/private/var/tmp/_bazel_username_0/84f257278b69232e5174bb7ca894d15f/execroot/__main__/bazel-out/darwin_x86_64-opt/bin/dragnn/python/graph_builder_test.runfiles/org_tensorflow/tensorflow/python/ops/check_ops.py", line 101, in _assert_static
    raise ValueError('\n'.join(data_static))
ValueError: Condition x == y did not hold element-wise:
x (parser/TrainingBeamSize:0) = 8
y (train-testFullInference-train-simple-parser/cond/assert_equal/y:0) = 1
----------------------------------------------------------------------
Ran 19 tests in 51.583s

FAILED (errors=1)
```
<issue_comment>username_1: /CC @calberti, @andorardo
<issue_comment>username_2: I am getting the same error too, on OSX.
<issue_comment>username_3: I get the same error on Ubuntu 16.04.
<issue_comment>username_4: I am getting the same error on Sierra 10.13. Does anyone know how to fix it?
<issue_comment>username_5: Hi there, we are checking to see if you still need help on this, as this seems to be a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue, and the error you are seeing. If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing it.<issue_closed>
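For context on the failure itself: in TF 1.x, `tf.assert_equal` performs a *static* check when both operands' values are known at graph-construction time, raising `ValueError` immediately rather than deferring the check to `session.run`. That is exactly what the traceback above shows: `training_beam_size` is statically 8 while the assertion demands 1. A minimal sketch reproducing the mechanism (TF 1.x semantics assumed; the value 8 is taken from the log, and the rest is illustrative rather than DRAGNN's actual code):

```python
import tensorflow as tf  # TF 1.x semantics assumed

# A statically known constant, like parser/TrainingBeamSize:0 in the log.
training_beam_size = tf.constant(8, name="TrainingBeamSize")

# Because both sides are known at graph-construction time, assert_equal
# takes the _assert_static path in check_ops.py and raises ValueError
# right here, before any session ever runs.
with tf.control_dependencies([tf.assert_equal(training_beam_size, 1)]):
    cost = tf.identity(tf.constant(0.0))  # never reached
```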
{'fraction_non_alphanumeric': 0.14563210989482722, 'fraction_numerical': 0.0767332045503327, 'mean_word_length': 5.435773480662983, 'pattern_counts': {'":': 0, '<': 9, '<?xml version=': 0, '>': 9, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '17080401', 'n_tokens_mistral': 3597, 'n_tokens_neox': 3123, 'n_words': 702}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: in_array (Line 49)
username_0: I keep getting this error on line 49 of the navwalker:

```
Warning: in_array() expects parameter 2 to be array, string given in navwalker.php on line 49
```
<issue_comment>username_1: Following. Can't seem to figure this one out.
<issue_comment>username_2: What worked for me was to be very careful with the menu name. I mean:

```php
require_once('navwalker.php');
register_nav_menus( array(
    'primary' => __( 'Primary Menu', 'menuname' ), // This name has to be the same as the one at *1
) );
```

```php
wp_nav_menu( array(
    'theme_location' => 'primary', // *1
    'depth' => 2,
    'container' => false,
    // 'items_wrap' => 'div',
    'menu_class' => 'navbar-menu',
    'menu_id' => 'primary-menu',
    'after' => "</div>",
    'walker' => new Navwalker()
) );
```

I was getting that error because I had registered the menu elsewhere with a different name.
<issue_comment>username_3: Having the same problem... Any solution?
<issue_comment>username_4: Fixed the issue. Check https://github.com/username_4/Bulma-Navwalker
<issue_comment>username_5: I changed the theme_location and it is working better. I still need to make some adjustments, but the error is gone.

```php
wp_nav_menu( array(
    'theme_location' => 'primary',
    'depth' => 2,
    ...
```
changed to:
```php
wp_nav_menu( array(
    'theme_location' => 'menu-1',
    'depth' => 2,
    ...
```
<issue_comment>username_6: Can a pull request be made for this fix?
<issue_comment>username_7: Finally got back to programming. I've successfully reproduced the problem and have come up with a solution, which is to update the readme, as I had skipped a step the last time I did this. Doing the first step should fix the problem. Update me otherwise.<issue_closed>
{'fraction_non_alphanumeric': 0.10582278481012658, 'fraction_numerical': 0.011139240506329114, 'mean_word_length': 2.586206896551724, 'pattern_counts': {'":': 0, '<': 12, '<?xml version=': 0, '>': 25, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '19160610', 'n_tokens_mistral': 595, 'n_tokens_neox': 562, 'n_words': 217}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Use Feature Gates to enable or disable features
username_0: There are a number of features that are being developed and are likely to be disabled by default in their early stage. Instead of adding a temporary config option for each feature and maintaining them separately, this patch introduces Feature Gates to toggle the features. It will be easier to choose a code branch based on FeatureGates' "Enabled" method and to promote features to Beta and GA. Closes #846
<issue_comment>username_1: One more question: will this bring gate flags, like `--featureGates=ClusterPolicy=true`?
<issue_comment>username_0: No, the library supports it, but I didn't add that CLI flag as we use a config file.
<issue_comment>username_0: /test-all
<issue_comment>username_0: /test-windows-conformance
<issue_comment>username_2: Is there a way to reduce the per-feature overhead of changing yamls -> featureset -> feature? For every new feature there are quite a lot of files to be changed, then reviewed, merged, etc., and it would be nice to have this more compact. I don't know whether this is possible, and I would rather stick to some standard.
<issue_comment>username_0: Perhaps the feature list can be removed from the yamls; I added it there for convenience of editing, but we should have a doc that lists experimental features and their stage anyway, so users can refer to that doc to configure features. As for the change in `antrea_features.go`, I don't think it can be more compact than it is. Currently a feature needs two lines of code: one for the feature name, one for its default value and stage.
<issue_comment>username_0: /test-all /skip-whole-conformance
<issue_comment>username_0: /test-all
<issue_comment>username_0: /test-windows-conformance
<issue_comment>username_0: /test-windows-conformance
<issue_comment>username_3: /test-windows-conformance
<issue_comment>username_0: @antoninbas Sure, let's discuss whether the Antrea service proxy should be an experimental feature or an optional feature in #772
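The "two lines per feature" layout described in the thread maps naturally onto a small registry keyed by feature name, where each entry carries a default value and a maturity stage, and callers branch on an `Enabled` lookup. A minimal Python sketch of the idea; the real implementation is Go code in `antrea_features.go`, and the feature name below is purely illustrative:

```python
from dataclasses import dataclass

ALPHA, BETA, GA = "Alpha", "Beta", "GA"

@dataclass(frozen=True)
class FeatureSpec:
    default: bool
    stage: str

# One entry per feature: its default and stage, mirroring the
# two-lines-per-feature layout discussed above.
DEFAULT_FEATURE_GATES = {
    "ClusterPolicy": FeatureSpec(default=False, stage=ALPHA),  # hypothetical feature
}

class FeatureGates:
    def __init__(self, overrides=None):
        self._enabled = {n: s.default for n, s in DEFAULT_FEATURE_GATES.items()}
        for name, value in (overrides or {}).items():
            if name not in self._enabled:
                raise ValueError(f"unknown feature gate: {name}")
            self._enabled[name] = value

    def enabled(self, name: str) -> bool:
        return self._enabled[name]

# Code paths toggle on the gate instead of on ad hoc per-feature config:
gates = FeatureGates(overrides={"ClusterPolicy": True})  # e.g. parsed from a config file
if gates.enabled("ClusterPolicy"):
    pass  # experimental code path
```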
{'fraction_non_alphanumeric': 0.05435329642677403, 'fraction_numerical': 0.009562154001006542, 'mean_word_length': 5.231974921630094, 'pattern_counts': {'":': 0, '<': 14, '<?xml version=': 0, '>': 16, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11477777', 'n_tokens_mistral': 528, 'n_tokens_neox': 514, 'n_words': 268}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: PowerShell .NET SDK Upgrade With V2020-02-02 ShortTermRetention (STR) of SQL DB.
username_0: Generate a new version of the .NET SDK for the new API V2020-02-02-Preview ShortTermRetentionPolicy.
1) The related unit tests don't pass yet, since backend development is not fully finished and the API is not deployed to a live stage.
2) This PR reads the STR API swagger from a feature branch rather than master.

**TODO:**
1. After the SQL DB API specification (https://github.com/Azure/azure-rest-api-specs/pull/9407) is merged into master, re-generate the clients. (../src/Generated...)
2. After the API is deployed to one stage, re-run the unit tests and re-copy the test sessions. (../tests/SessionRecords/ShortTermRetentionTests/..)
3. Sync with the latest master and bump the version by one. (.../Microsoft.Azure.Management.Sql.csproj, .../AssemblyInfo.cs)

**We want to start code review early, before the above TODO items are completed.** So please ignore the TODO items for now and leave comments on this PR. You don't have to approve at this moment.
<issue_comment>username_0: @username_1 The auto-generate command can produce more unrelated changes. Some files I can leave out, but some files include both my changes and unrelated ones; what should I do with those?
// <auto-generated>
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is
// regenerated.
// </auto-generated>
Last time I fixed some of the changes manually, since I knew they were not generated from master. But now, should I include all changes or fix things manually?
<issue_comment>username_1: @username_0 never edit generated code. Basically, if you wish to control what is generated and what is not, edit the tag defined in [readme.md](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/sql/resource-manager/readme.md). I suggest you talk to whoever contributed to the last release, and define what exactly you want to put in the SDK. (Removing an operation is a breaking change, and we want to avoid breaking changes if possible, even for a preview package.)
<issue_comment>username_0: I edited my change into readme.md, see: Tag: package-composite-v3 - Microsoft.Sql/preview/2020-02-02-preview/shortTermRetentionPolicies.json. I generate the client files from the master branch of azure-rest-api-specs, which is why many unrelated clients get generated as well. How can I avoid them?
<issue_comment>username_0: Just verified with Alicia (who released the PS .NET SDK last year); she said it is OK to leave unrelated changes in, but not to manually fix auto-generated files. I'll update in the next push.
<issue_comment>username_0: Why does this [PR](https://github.com/Azure/azure-sdk-for-net/pull/14717/files?file-filters%5B%5D=.csproj&file-filters%5B%5D=.txt) include only the related changes to the file SqlManagementClient.cs? When I run the command to auto-generate, there are many changes unrelated to the new API I added. I don't think it makes sense to include many other swagger changes in my PR. I double-checked a few other recent related PRs; they only include their own changes. He updated the version from 1.44.0.0 to 1.44.1.0. Should I update the version to 1.45 or 1.44.2.0? What's the difference?
<issue_comment>username_0: This PR has an expected build error because Sql.Tests.ShortTermRetentionTests.TestShortTermRetentionPolicy fails. Will come back to update this PR once the live API is deployed.
<issue_comment>username_2: Totally agreed. We need a major cleanup. Also, I used 1.44.1-preview as was suggested in the Request Changes comments.
<issue_comment>username_3: @username_0 Please fix the merge conflicts; we can't review until this branch is up to date.
<issue_comment>username_0: We (server side) are waiting for the backend code to be deployed to the Stage cluster to unblock the scenario tests; otherwise this PR is still going to be blocked. Will keep this PR updated once we have a live API, and will fix any conflicts and new versions at that time. Thanks a lot, guys.
<issue_comment>username_4: @username_0 I am going to close this PR. You can raise a new PR once the service-side issues are resolved.
<issue_comment>username_0: Can you reopen this PR? The backend code changes have already been deployed to Stage. @username_4
{'fraction_non_alphanumeric': 0.07046583116576023, 'fraction_numerical': 0.019626389217309057, 'mean_word_length': 4.2875, 'pattern_counts': {'":': 0, '<': 15, '<?xml version=': 0, '>': 15, 'https://': 3, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 3, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '3104953', 'n_tokens_mistral': 1191, 'n_tokens_neox': 1131, 'n_words': 574}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: display() error
username_0: before:
```
julia> collect(())
0-element Array{Union{},1}
```
after using Contour:
```
julia> using Contour

julia> collect(())
Error showing value of type Array{Union{},1}:
ERROR: MethodError: no method matching display(::Array{Union{},1})
Closest candidates are:
  display(::Any) at multimedia.jl:320
  display(::AbstractDisplay, ::AbstractString, ::Any) at multimedia.jl:214
  display(::AbstractString, ::Any) at multimedia.jl:215
  ...
Stacktrace:
 [1] display(::Any) at ./multimedia.jl:330
 [2] #invokelatest#1 at ./essentials.jl:712 [inlined]
 [3] invokelatest at ./essentials.jl:711 [inlined]
 [4] print_response(::IO, ::Any, ::Bool, ::Bool, ::Any) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:161
 [5] print_response(::REPL.AbstractREPL, ::Any, ::Bool, ::Bool) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:146
 [6] (::REPL.var"#do_respond#38"{Bool,REPL.var"#48#57"{REPL.LineEditREPL,REPL.REPLHistoryProvider},REPL.LineEditREPL,REPL.LineEdit.Prompt})(::Any, ::Any, ::Any) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:729
 [7] #invokelatest#1 at ./essentials.jl:712 [inlined]
 [8] invokelatest at ./essentials.jl:711 [inlined]
 [9] run_interface(::REPL.Terminals.TextTerminal, ::REPL.LineEdit.ModalInterface, ::REPL.LineEdit.MIState) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/LineEdit.jl:2354
 [10] run_frontend(::REPL.LineEditREPL, ::REPL.REPLBackendRef) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:1055
 [11] run_repl(::REPL.AbstractREPL, ::Any) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.4/REPL/src/REPL.jl:206
 [12] (::Base.var"#764#766"{Bool,Bool,Bool,Bool})(::Module) at ./client.jl:383
 [13] #invokelatest#1 at ./essentials.jl:712 [inlined]
 [14] invokelatest at ./essentials.jl:711 [inlined]
 [15] run_main_repl(::Bool, ::Bool, ::Bool, ::Bool, ::Bool) at ./client.jl:367
 [16] exec_options(::Base.JLOptions) at ./client.jl:305
 [17] _start() at ./client.jl:484
```
from GiovineItalia/Gadfly.jl#1460<issue_closed>
<issue_comment>username_1: Thanks for the bug report and apologies for the delay. This should be fixed in the 0.5.4 release which has just been tagged.
{'fraction_non_alphanumeric': 0.17805466237942122, 'fraction_numerical': 0.055868167202572344, 'mean_word_length': 5.2537688442211055, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 7, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '11962714', 'n_tokens_mistral': 1107, 'n_tokens_neox': 988, 'n_words': 162}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: No explanation created with automl username_0: Hi, I am using automl for 'forecast' with 'model_explainability'=True The log states: 'Current status: BestRunExplainModel. Best run model explanations started' ...however no explanations are created. Thank you and best regards, Robert --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 8aec7bd5-0f42-7a2b-31bc-331fddc49e7a * Version Independent ID: e851a806-59b5-c716-4e5b-cc7d421104a4 * Content: [Model interpretability in automated machine learning - Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl#feedback) * Content Source: [articles/machine-learning/how-to-machine-learning-interpretability-automl.md](https://github.com/Microsoft/azure-docs/blob/master/articles/machine-learning/how-to-machine-learning-interpretability-automl.md) * Service: **machine-learning** * Sub-service: **core** * GitHub Login: @mesameki * Microsoft Alias: **mesameki** <issue_comment>username_1: @username_0 Thanks for the question! We are investigating and will update you shortly. <issue_comment>username_2: @username_0 Could you please let us know if there are any explanations visible on the portal against your run? ![image](https://user-images.githubusercontent.com/46958063/73917780-1c23af00-48e6-11ea-908c-d23ab623c053.png) We have run this [notebook ](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/regression-hardware-performance-explanation-and-featurization/auto-ml-regression-hardware-performance-explanation-and-featurization.ipynb)from Machine Learning repo which seems to have generated an explanation as seen above. Is it possible to check your notebook cell configuration with this notebook and correct it if necessary? <issue_comment>username_0: Hi, not in the portal either. <issue_comment>username_0: Hi, I just run the notebook auto-ml-regression-hardware-performance-explanation-and-featurization.ipynb <https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/regression-hardware-performance-explanation-and-featurization/auto-ml-regression-hardware-performance-explanation-and-featurization.ipynb> ... it does not create any explanations in the portal. [image: image.png] <issue_comment>username_0: ok, I just found out that explanations are created on the remote compute target but not on the local compute instance. Is that a known behavior? Thank you and best regards, Robert <issue_comment>username_2: @username_0 Explanations are designed to work for [remote and local](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability#local-and-remote-compute-target) compute. You can try to use this sample [notebook ](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/explain-model/azure-integration/scoring-time/train-explain-model-locally-and-deploy.ipynb)for checking local explanations.<issue_closed> <issue_comment>username_2: @username_0 We hope local compute explanations are available after using the setup in sample notebook. We will now proceed to close this thread. If there are further questions regarding this matter, please tag @username_2 in your reply.
{'fraction_non_alphanumeric': 0.09663250366032211, 'fraction_numerical': 0.02576866764275256, 'mean_word_length': 5.698039215686275, 'pattern_counts': {'":': 0, '<': 11, '<?xml version=': 0, '>': 11, 'https://': 7, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '25810710', 'n_tokens_mistral': 1075, 'n_tokens_neox': 989, 'n_words': 277}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: add all wheels option to pip download Fixes #4422 username_0: <issue_comment>username_1: This PR does not implement #4422 as `pip download` is still downloading a single file. It is apparently aimed at making `PackageFinder` more general to allow an easier unsupported use in piptools. <issue_comment>username_2: I just closed #4422 under the rationale that this isn't something that is a common enough use case for pip to support. Further, this PR does not add the functionality as requested in #4422. I'm closing this. Feel free to re-open this PR if you feel otherwise.
{'fraction_non_alphanumeric': 0.049019607843137254, 'fraction_numerical': 0.03104575163398693, 'mean_word_length': 4.894230769230769, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 3, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '23289512', 'n_tokens_mistral': 171, 'n_tokens_neox': 157, 'n_words': 92}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: Creating JWS with empty "" payloads
username_0: Hello, I'm working against an API that uses JWS on its GET requests, meaning there is no body. There is a check in the JWT.Encode method preventing this behavior. Could the check for an empty payload be removed? It seems to be a rather recent development (~1 year) that can be seen in this RFC draft: https://tools.ietf.org/id/draft-ietf-acme-acme-16.html#rfc.section.6.3. I'm currently using .NET standard 4.6.2 with jose 2.5.0. Thanks for the work,
<issue_comment>username_1: Hi @username_0, are you asking for detached payload (it is supported: https://github.com/username_1/jose-jwt/#unencoded-and-detached-content-aka-rfc-7797) or how would you expect it to work? In a sense, if you don't provide a body, what do you expect the signature to be based on? Just the headers?
<issue_comment>username_0: Hello @username_1, Yes, the signature is based solely on the headers. I'm integrating an API that uses JWS signatures with detached payloads. This works flawlessly with POST since there are bodies, but it has a few GET/DELETE requests with blank payloads. With curl, these requests would look like this: curl -X DELETE https://myapi.com -H 'Signature: xxxxxxxx..xxxxxxxxxxxx' Thanks for taking the time to answer,
<issue_comment>username_1: Hm.. ok, let me check how the other JWT implementations I usually cross-test against are addressing it (empty payload). Seems important to be compatible here.
<issue_comment>username_0: I looked into this before creating the issue, to be sure I wasn't the only one with this need. This library adds an extra parameter that allows or disallows blank payloads; it is false by default. https://github.com/nov/json-jwt/issues/69
<issue_comment>username_1: ok, interesting. Let me dig into the others too :) Will get back to you.
<issue_comment>username_1: @username_0, checked the other libs; looks like all of them support it as a valid use case. So I don't see a reason not to update jose-jwt too. Will make a fix.
<issue_comment>username_0: For other people having the same issue, you can create an empty payload in bytes:
```
// Sign a zero-byte payload and detach it, so the resulting compact
// JWS ("header..signature") is based on the protected headers alone.
var emptypayload = Encoding.UTF8.GetBytes(string.Empty);
newJws = JWT.EncodeBytes(
    emptypayload,
    ecdsapriv,                               // ES256 private key
    JwsAlgorithm.ES256,
    headers.data,                            // extra protected headers
    null,                                    // default JwtSettings
    new JwtOptions { DetachPayload = true });
```
<issue_comment>username_1: That reminds me I need to make a release 🀦
<issue_comment>username_1: Released: https://github.com/username_1/jose-jwt/releases/tag/v2.6.0 https://www.nuget.org/packages/jose-jwt/2.6.0 @username_0 check it out, and sorry for the delay.<issue_closed>
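To make the wire format above concrete: per RFC 7515, the JWS signing input is `BASE64URL(header) + "." + BASE64URL(payload)`, and an empty payload encodes to the empty string, so the signature really is computed over the protected headers alone; the detached compact form then omits the payload segment entirely, which is why the curl example shows two consecutive dots. A minimal Python sketch of that construction, using HS256 for brevity (the API in the thread uses ES256, and the key below is a placeholder):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# RFC 7515 signing input: BASE64URL(header) "." BASE64URL(payload).
# An empty payload contributes nothing after the dot.
header = b64url(json.dumps({"alg": "HS256"}, separators=(",", ":")).encode())
signing_input = (header + ".").encode()

key = b"placeholder-secret"  # illustrative only
sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())

# Detached compact serialization: the payload segment is left empty,
# yielding the "xxxx..yyyy" shape seen in the curl example above.
detached_jws = header + ".." + sig
print(detached_jws)
```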
{'fraction_non_alphanumeric': 0.0803197563760944, 'fraction_numerical': 0.016368481157213552, 'mean_word_length': 4.1936758893280635, 'pattern_counts': {'":': 0, '<': 12, '<?xml version=': 0, '>': 12, 'https://': 6, 'lorem ipsum': 0, 'www.': 1, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 2, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '7719798', 'n_tokens_mistral': 832, 'n_tokens_neox': 777, 'n_words': 331}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: No icon in ModMenu username_0: This is less of a bug and more of a feature request, but I will include it anyway: There is no icon in Modmenu, even though all other mods have one. You can compare to the mods on either side in the attached image. ![image](https://user-images.githubusercontent.com/68545280/101952375-7a997800-3bf0-11eb-9053-2e9769d85420.png) <issue_comment>username_1: There was actually an icon, but it was black text on a transparent background, so you couldn't see shit. I've added a better icon. (Although it does kind of look shit in modmenu :/)<issue_closed>
{'fraction_non_alphanumeric': 0.07281553398058252, 'fraction_numerical': 0.07119741100323625, 'mean_word_length': 4.785046728971962, 'pattern_counts': {'":': 0, '<': 4, '<?xml version=': 0, '>': 4, 'https://': 1, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 1, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '24654841', 'n_tokens_mistral': 209, 'n_tokens_neox': 183, 'n_words': 86}
starcoder-github-issues-filtered-structured
<issue_start><issue_comment>Title: data: Add (old) Logitech Performance Mouse MX
username_0: Add support for this old business Logitech mouse. Hint: I'm new to libratbag, so sorry if something is missing.
<issue_comment>username_1: Actually, sorry. I misread the USB ID. This ID belongs to the receiver; we need to add support to the kernel driver first.
{'fraction_non_alphanumeric': 0.06060606060606061, 'fraction_numerical': 0.005509641873278237, 'mean_word_length': 4.967213114754099, 'pattern_counts': {'":': 0, '<': 3, '<?xml version=': 0, '>': 3, 'https://': 0, 'lorem ipsum': 0, 'www.': 0, 'xml': 0}, 'pii_count': 0, 'substrings_counts': 0, 'word_list_counts': {'cursed_substrings.json': 0, 'profanity_word_list.json': 0, 'sexual_word_list.json': 0, 'zh_pornsignals.json': 0}}
{'dir': 'github-issues-filtered-structured', 'id': '18710138', 'n_tokens_mistral': 103, 'n_tokens_neox': 98, 'n_words': 53}