title | diff | body | url | created_at | closed_at | merged_at | updated_at | diff_len | repo_name | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|
Disable quantum/quantum_random.py (attempt 2) | diff --git a/DIRECTORY.md b/DIRECTORY.md
index 29514579ceb0..af150b12984b 100644
--- a/DIRECTORY.md
+++ b/DIRECTORY.md
@@ -1063,7 +1063,6 @@
* [Q Fourier Transform](quantum/q_fourier_transform.py)
* [Q Full Adder](quantum/q_full_adder.py)
* [Quantum Entanglement](quantum/quantum_entanglement.py)
- * [Quantum Random](quantum/quantum_random.py)
* [Quantum Teleportation](quantum/quantum_teleportation.py)
* [Ripple Adder Classic](quantum/ripple_adder_classic.py)
* [Single Qubit Measure](quantum/single_qubit_measure.py)
diff --git a/quantum/quantum_random.py b/quantum/quantum_random.py.DISABLED.txt
similarity index 100%
rename from quantum/quantum_random.py
rename to quantum/quantum_random.py.DISABLED.txt
| ### Describe your change:
Temporarily disable [quantum/quantum_random.py](https://github.com/tianyizheng02/Python/blob/642c88548815e6bc77c388f3cf1b0459c3808104/quantum/quantum_random.py.DISABLED.txt) because it produces an illegal instruction error that causes all builds to fail (see #8899)
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [ ] All filenames are in all lowercase characters with no spaces or dashes.
* [ ] All functions and variable names follow Python naming conventions.
* [ ] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [ ] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [ ] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| https://api.github.com/repos/TheAlgorithms/Python/pulls/8902 | 2023-07-28T20:01:34Z | 2023-07-28T20:08:40Z | 2023-07-28T20:08:40Z | 2023-07-30T09:30:26Z | 233 | TheAlgorithms/Python | 30,196 |
replace `werkzeug.urls` with `urllib.parse` | diff --git a/CHANGES.rst b/CHANGES.rst
index cd1a04c489..a8762477eb 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -1,3 +1,11 @@
+Version 2.2.4
+-------------
+
+Unreleased
+
+- Update for compatibility with Werkzeug 2.3.
+
+
Version 2.2.3
-------------
diff --git a/src/flask/app.py b/src/flask/app.py
index ff6b097382..d904d6ba38 100644
--- a/src/flask/app.py
+++ b/src/flask/app.py
@@ -11,6 +11,7 @@
from itertools import chain
from threading import Lock
from types import TracebackType
+from urllib.parse import quote as _url_quote
import click
from werkzeug.datastructures import Headers
@@ -27,7 +28,6 @@
from werkzeug.routing import RoutingException
from werkzeug.routing import Rule
from werkzeug.serving import is_running_from_reloader
-from werkzeug.urls import url_quote
from werkzeug.utils import redirect as _wz_redirect
from werkzeug.wrappers import Response as BaseResponse
@@ -2034,7 +2034,8 @@ def url_for(
return self.handle_url_build_error(error, endpoint, values)
if _anchor is not None:
- rv = f"{rv}#{url_quote(_anchor)}"
+ _anchor = _url_quote(_anchor, safe="%!#$&'()*+,/:;=?@")
+ rv = f"{rv}#{_anchor}"
return rv
diff --git a/src/flask/testing.py b/src/flask/testing.py
index 3b21b093fb..8cb2d1bd94 100644
--- a/src/flask/testing.py
+++ b/src/flask/testing.py
@@ -3,11 +3,11 @@
from contextlib import ExitStack
from copy import copy
from types import TracebackType
+from urllib.parse import urlsplit
import werkzeug.test
from click.testing import CliRunner
from werkzeug.test import Client
-from werkzeug.urls import url_parse
from werkzeug.wrappers import Request as BaseRequest
from .cli import ScriptInfo
@@ -68,7 +68,7 @@ def __init__(
if url_scheme is None:
url_scheme = app.config["PREFERRED_URL_SCHEME"]
- url = url_parse(path)
+ url = urlsplit(path)
base_url = (
f"{url.scheme or url_scheme}://{url.netloc or http_host}"
f"/{app_root.lstrip('/')}"
| For compatibility with Werkzeug 2.3, which deprecates `werkzeug.urls`. | https://api.github.com/repos/pallets/flask/pulls/5026 | 2023-03-11T16:20:06Z | 2023-03-11T16:21:58Z | 2023-03-11T16:21:58Z | 2023-03-26T00:06:10Z | 579 | pallets/flask | 19,959 |
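A minimal sketch of the stdlib replacements this PR makes (assumed equivalences, inferred from the diff): `werkzeug.urls.url_quote` becomes `urllib.parse.quote` with the explicit safe-character set used in `app.py`, and `werkzeug.urls.url_parse` becomes `urllib.parse.urlsplit`, whose result exposes the same `.scheme` and `.netloc` attributes the test client reads.

```python
from urllib.parse import quote, urlsplit

# url_quote(_anchor) -> quote(...) with the safe set from the diff:
# the space is percent-encoded, while characters in `safe` pass through.
anchor = quote("section one?", safe="%!#$&'()*+,/:;=?@")
assert anchor == "section%20one?"

# url_parse(path) -> urlsplit(path); scheme and netloc behave the same way.
url = urlsplit("https://example.org/app/page")
assert url.scheme == "https"
assert url.netloc == "example.org"
```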
new library and section additions | diff --git a/README.md b/README.md
index bf5cd490..a3e70ade 100644
--- a/README.md
+++ b/README.md
@@ -54,6 +54,9 @@ Further resources:
- [Natural Language Processing](#natural-language-processing-2)
- [Erlang](#erlang)
- [General-Purpose Machine Learning](#general-purpose-machine-learning-7)
+ - [Fortran](#fortran)
+ - [General-Purpose Machine Learning](#fortran-general-purpose-machine-learning)
+ - [Data Analysis / Data Visualization](#fortran-data-analysis-visualization)
- [Go](#go)
- [Natural Language Processing](#natural-language-processing-3)
- [General-Purpose Machine Learning](#general-purpose-machine-learning-8)
@@ -223,6 +226,7 @@ Further resources:
* [MLDB](https://mldb.ai) - The Machine Learning Database is a database designed for machine learning. Send it commands over a RESTful API to store data, explore it using SQL, then train machine learning models and expose them as APIs.
* [mlpack](https://www.mlpack.org/) - A scalable C++ machine learning library.
* [MXNet](https://github.com/apache/incubator-mxnet) - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Go, Javascript and more.
+* [ParaMonte](https://github.com/cdslaborg/paramonte) - A general-purpose library with C/C++ interface for Bayesian data analysis and visualization via serial/parallel Monte Carlo and MCMC simulations. Documentation can be found [here](https://www.cdslab.org/paramonte/).
* [proNet-core](https://github.com/cnclabs/proNet-core) - A general-purpose network embedding framework: pair-wise representations optimization Network Edit.
* [PyCUDA](https://mathema.tician.de/software/pycuda/) - Python interface to CUDA
* [ROOT](https://root.cern.ch) - A modular scientific software framework. It provides all the functionalities needed to deal with big data processing, statistical analysis, visualization and storage.
@@ -346,6 +350,20 @@ Further resources:
* [Disco](https://github.com/discoproject/disco/) - Map Reduce in Erlang. **[Deprecated]**
* [Yanni](https://bitbucket.org/nato/yanni/overview) - ANN neural networks using Erlangs leightweight processes.
+<a name="fortran"></a>
+## Fortran
+
+<a name="fortran-general-purpose-machine-learning"></a>
+#### General-Purpose Machine Learning
+
+* [neural-fortran](https://github.com/modern-fortran/neural-fortran) - A parallel neural net microframework.
+Read the paper [here](https://arxiv.org/abs/1902.06714).
+
+<a name="fortran-data-analysis-visualization"></a>
+#### Data Analysis / Data Visualization
+
+* [ParaMonte](https://github.com/cdslaborg/paramonte) - A general-purpose Fortran library for Bayesian data analysis and visualization via serial/parallel Monte Carlo and MCMC simulations. Documentation can be found [here](https://www.cdslab.org/paramonte/).
+
<a name="go"></a>
## Go
@@ -796,6 +814,7 @@ on MNIST digits[DEEP LEARNING].
<a name="matlab-data-analysis"></a>
#### Data Analysis / Data Visualization
+* [ParaMonte](https://github.com/cdslaborg/paramonte) - A general-purpose MATLAB library for Bayesian data analysis and visualization via serial/parallel Monte Carlo and MCMC simulations. Documentation can be found [here](https://www.cdslab.org/paramonte/).
* [matlab_bgl](https://www.cs.purdue.edu/homes/dgleich/packages/matlab_bgl/) - MatlabBGL is a Matlab package for working with graphs.
* [gaimc](https://www.mathworks.com/matlabcentral/fileexchange/24134-gaimc---graph-algorithms-in-matlab-code) - Efficient pure-Matlab implementations of graph algorithms to complement MatlabBGL's mex functions.
@@ -1122,6 +1141,7 @@ be
* [NetworkX](https://networkx.github.io/) - A high-productivity software for complex networks.
* [igraph](https://igraph.org/python/) - binding to igraph library - General purpose graph library.
* [Pandas](https://pandas.pydata.org/) - A library providing high-performance, easy-to-use data structures and data analysis tools.
+* [ParaMonte](https://github.com/cdslaborg/paramonte) - A general-purpose Python library for Bayesian data analysis and visualization via serial/parallel Monte Carlo and MCMC simulations. Documentation can be found [here](https://www.cdslab.org/paramonte/).
* [Open Mining](https://github.com/mining/mining) - Business Intelligence (BI) in Python (Pandas web interface) **[Deprecated]**
* [PyMC](https://github.com/pymc-devs/pymc) - Markov Chain Monte Carlo sampling toolkit.
* [zipline](https://github.com/quantopian/zipline) - A Pythonic algorithmic trading library.
| I have added a new "Fortran libraries" section to this list and included two Fortran machine learning packages. In addition, the [ParaMonte library](https://github.com/cdslaborg/paramonte) in multiple different languages is also listed in the appropriate sections. | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/722 | 2020-09-20T07:30:15Z | 2020-09-21T14:13:54Z | 2020-09-21T14:13:54Z | 2020-09-21T14:13:54Z | 1,165 | josephmisiti/awesome-machine-learning | 52,350 |
🔧 Update sponsors, add Reflex | diff --git a/README.md b/README.md
index aeb29b5874e55..06c0c44522b74 100644
--- a/README.md
+++ b/README.md
@@ -51,6 +51,7 @@ The key features are:
<a href="https://www.buildwithfern.com/?utm_source=tiangolo&utm_medium=website&utm_campaign=main-badge" target="_blank" title="Fern | SDKs and API docs"><img src="https://fastapi.tiangolo.com/img/sponsors/fern.svg"></a>
<a href="https://www.porter.run" target="_blank" title="Deploy FastAPI on AWS with a few clicks"><img src="https://fastapi.tiangolo.com/img/sponsors/porter.png"></a>
<a href="https://bump.sh/fastapi?utm_source=fastapi&utm_medium=referral&utm_campaign=sponsor" target="_blank" title="Automate FastAPI documentation generation with Bump.sh"><img src="https://fastapi.tiangolo.com/img/sponsors/bump-sh.svg"></a>
+<a href="https://reflex.dev" target="_blank" title="Reflex"><img src="https://fastapi.tiangolo.com/img/sponsors/reflex.png"></a>
<a href="https://www.deta.sh/?ref=fastapi" target="_blank" title="The launchpad for all your (team's) ideas"><img src="https://fastapi.tiangolo.com/img/sponsors/deta.svg"></a>
<a href="https://training.talkpython.fm/fastapi-courses" target="_blank" title="FastAPI video courses on demand from people you trust"><img src="https://fastapi.tiangolo.com/img/sponsors/talkpython.png"></a>
<a href="https://testdriven.io/courses/tdd-fastapi/" target="_blank" title="Learn to build high-quality web apps with best practices"><img src="https://fastapi.tiangolo.com/img/sponsors/testdriven.svg"></a>
diff --git a/docs/en/data/sponsors.yml b/docs/en/data/sponsors.yml
index dac47d2f07034..f2e07cbd8e194 100644
--- a/docs/en/data/sponsors.yml
+++ b/docs/en/data/sponsors.yml
@@ -14,6 +14,9 @@ gold:
- url: https://bump.sh/fastapi?utm_source=fastapi&utm_medium=referral&utm_campaign=sponsor
title: Automate FastAPI documentation generation with Bump.sh
img: https://fastapi.tiangolo.com/img/sponsors/bump-sh.svg
+ - url: https://reflex.dev
+ title: Reflex
+ img: https://fastapi.tiangolo.com/img/sponsors/reflex.png
silver:
- url: https://www.deta.sh/?ref=fastapi
title: The launchpad for all your (team's) ideas
diff --git a/docs/en/data/sponsors_badge.yml b/docs/en/data/sponsors_badge.yml
index acbcc220567d7..43b69bf003cda 100644
--- a/docs/en/data/sponsors_badge.yml
+++ b/docs/en/data/sponsors_badge.yml
@@ -21,3 +21,4 @@ logins:
- fern-api
- ndimares
- svixhq
+ - Alek99
diff --git a/docs/en/docs/img/sponsors/reflex-banner.png b/docs/en/docs/img/sponsors/reflex-banner.png
new file mode 100644
index 0000000000000..3095c3a7b4090
Binary files /dev/null and b/docs/en/docs/img/sponsors/reflex-banner.png differ
diff --git a/docs/en/docs/img/sponsors/reflex.png b/docs/en/docs/img/sponsors/reflex.png
new file mode 100644
index 0000000000000..59c46a1104140
Binary files /dev/null and b/docs/en/docs/img/sponsors/reflex.png differ
diff --git a/docs/en/overrides/main.html b/docs/en/overrides/main.html
index 4c7f19fd4dc95..ed08028ec6141 100644
--- a/docs/en/overrides/main.html
+++ b/docs/en/overrides/main.html
@@ -52,6 +52,12 @@
<img class="sponsor-image" src="/img/sponsors/bump-sh-banner.svg" />
</a>
</div>
+ <div class="item">
+ <a title="Reflex" style="display: block; position: relative;" href="https://reflex.dev" target="_blank">
+ <span class="sponsor-badge">sponsor</span>
+ <img class="sponsor-image" src="/img/sponsors/reflex-banner.png" />
+ </a>
+ </div>
</div>
</div>
{% endblock %}
| 🔧 Update sponsors, add Reflex | https://api.github.com/repos/tiangolo/fastapi/pulls/10676 | 2023-11-18T13:34:23Z | 2023-11-18T13:38:01Z | 2023-11-18T13:38:01Z | 2023-11-18T13:38:02Z | 1,062 | tiangolo/fastapi | 23,482 |
Upgrade to transformers==4.28 | diff --git a/README.md b/README.md
index 7264d25cbe..511c29edfc 100644
--- a/README.md
+++ b/README.md
@@ -27,11 +27,7 @@ Join our [Discord](https://discord.gg/h6kCZb72G7) server and follow our [Twitter
### Method 1: With pip
```bash
-# Install FastChat
pip3 install fschat
-
-# Install the latest main branch of huggingface/transformers
-pip3 install git+https://github.com/huggingface/transformers
```
### Method 2: From source
@@ -61,7 +57,7 @@ You can add our delta to the original LLaMA weights to obtain the Vicuna weights
2. Use the following scripts to get Vicuna weights by applying our delta. They will automatically download delta weights from our Hugging Face [account](https://huggingface.co/lmsys).
**NOTE**:
-Weights v1.1 are only compatible with the latest main branch of huggingface/transformers and ``fschat >= 0.2.0``.
+Weights v1.1 are only compatible with ```transformers>=4.28.0``` and ``fschat >= 0.2.0``.
Please update your local packages accordingly. If you follow the above commands to do a fresh install, then you should get all the correct versions.
### Vicuna-7B
@@ -90,8 +86,8 @@ See [docs/weights_version.md](docs/weights_version.md) for all versions of weigh
### Low CPU Memory Conversion
You can try these methods to reduce the CPU RAM requirement of weight conversion.
-1. You can append "--low-cpu-mem" to the commands above, which will split large weight files into smaller ones and use the disk as temporary storage. This can keep the peak memory at less than 16GB.
-2. You can create a large swap file and rely on the operating system to automatically utilize the disk as virtual memory.
+1. Append `--low-cpu-mem` to the commands above, which will split large weight files into smaller ones and use the disk as temporary storage. This can keep the peak memory at less than 16GB.
+2. Create a large swap file and rely on the operating system to automatically utilize the disk as virtual memory.
## Inference with Command Line Interface
diff --git a/pyproject.toml b/pyproject.toml
index 01ae14290a..160e8eef91 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -15,8 +15,8 @@ classifiers = [
dependencies = [
"accelerate", "fastapi", "gradio==3.23", "markdown2[all]", "numpy",
"prompt_toolkit>=3.0.0", "requests", "rich>=10.0.0", "sentencepiece",
- "shortuuid", "tokenizers>=0.12.1", "torch", "uvicorn", "wandb",
- "transformers @ git+https://github.com/huggingface/transformers.git"
+ "shortuuid", "transformers>=4.28.0", "tokenizers>=0.12.1", "torch",
+ "uvicorn", "wandb",
]
[project.urls]
| https://api.github.com/repos/lm-sys/FastChat/pulls/449 | 2023-04-16T13:20:58Z | 2023-04-16T13:21:38Z | 2023-04-16T13:21:38Z | 2023-04-16T13:21:44Z | 735 | lm-sys/FastChat | 41,170 |
|
Adding toml as dependency | diff --git a/pyproject.toml b/pyproject.toml
index 987aadf5a3..b5fc76c3fe 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -28,6 +28,7 @@ tiktoken = ">=0.0.4"
tabulate = "0.9.0"
python-dotenv = ">=0.21.0"
langchain = ">=0.0.335"
+toml = ">=0.10.2"
[tool.poetry.group.dev.dependencies]
pytest = ">=7.3.1"
@@ -72,7 +73,7 @@ gpte_test_application = 'tests.caching_main:app'
[tool.poetry.extras]
test = ["pytest", "pytest-cov"]
-doc = ["autodoc_pydantic", "myst_parser", "nbsphinx", "sphinx", "sphinx-autobuild", "sphinx_book_theme", "sphinx_rtd_theme", "sphinx-typlog-theme", "sphinx-panels", "toml", "myst-nb", "linkchecker", "sphinx-copybutton", "markdown-include", "sphinx_copybutton"]
+doc = ["autodoc_pydantic", "myst_parser", "nbsphinx", "sphinx", "sphinx-autobuild", "sphinx_book_theme", "sphinx_rtd_theme", "sphinx-typlog-theme", "sphinx-panels", "myst-nb", "linkchecker", "sphinx-copybutton", "markdown-include", "sphinx_copybutton"]
experimental = ["llama-index", "rank-bm25", "tree_sitter_languages"]
[tool.ruff]
| This fixes the Docker image not having the toml dependency. The error was most likely introduced here https://github.com/gpt-engineer-org/gpt-engineer/pull/937/files#diff-56bcc43fa5663093cbd7d862ed7849d2e30b13257faed00e09513d88d93d8c45R27
```
Traceback (most recent call last):
File "/usr/local/bin/gpt-engineer", line 5, in <module>
from gpt_engineer.applications.cli.main import app
File "/app/gpt_engineer/applications/cli/main.py", line 40, in <module>
from gpt_engineer.applications.cli.file_selector import FileSelector
File "/app/gpt_engineer/applications/cli/file_selector.py", line 27, in <module>
import toml
ModuleNotFoundError: No module named 'toml'
``` | https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/961 | 2024-01-09T21:09:12Z | 2024-01-10T19:10:44Z | 2024-01-10T19:10:44Z | 2024-01-10T19:10:44Z | 375 | gpt-engineer-org/gpt-engineer | 33,357 |
Fix issue with single quotes in payload for NodeJS lambda in local execution mode | diff --git a/localstack/services/awslambda/lambda_executors.py b/localstack/services/awslambda/lambda_executors.py
index 17969408fdbc7..272525cbb7f38 100644
--- a/localstack/services/awslambda/lambda_executors.py
+++ b/localstack/services/awslambda/lambda_executors.py
@@ -1112,7 +1112,7 @@ def execute_in_container(
class LambdaExecutorLocal(LambdaExecutor):
def _execute_in_custom_runtime(
- self, cmd: str, lambda_function: LambdaFunction = None
+ self, cmd: Union[str, List[str]], lambda_function: LambdaFunction = None
) -> InvocationResult:
"""
Generic run function for executing lambdas in custom runtimes.
@@ -1313,15 +1313,17 @@ def execute_javascript_lambda(
function = handler.split(".")[-1]
event_json_string = "%s" % (json.dumps(json_safe(event)) if event else "{}")
context_json_string = "%s" % (json.dumps(context.__dict__) if context else "{}")
- cmd = (
- "node -e 'require(\"%s\").%s(%s,%s).then(r => process.stdout.write(JSON.stringify(r)))'"
+ cmd = [
+ "node",
+ "-e",
+ 'require("%s").%s(%s,%s).then(r => process.stdout.write(JSON.stringify(r)))'
% (
main_file,
function,
event_json_string,
context_json_string,
- )
- )
+ ),
+ ]
LOG.info(cmd)
result = self._execute_in_custom_runtime(cmd, lambda_function=lambda_function)
return result
diff --git a/tests/integration/test_lambda.py b/tests/integration/test_lambda.py
index 9de25f54252a7..eff6d56f59ec7 100644
--- a/tests/integration/test_lambda.py
+++ b/tests/integration/test_lambda.py
@@ -1527,6 +1527,37 @@ def test_invoke_nodejs_lambda(self):
# clean up
testutil.delete_lambda_function(TEST_LAMBDA_NAME_JS)
+ def test_invoke_nodejs_lambda_with_payload_containing_quotes(self):
+ handler_file = os.path.join(THIS_FOLDER, "lambdas", "lambda_handler.js")
+ function_name = "test_lambda_%s" % short_uid()
+ testutil.create_lambda_function(
+ func_name=function_name,
+ zip_file=testutil.create_zip_file(handler_file, get_content=True),
+ runtime=LAMBDA_RUNTIME_NODEJS14X,
+ handler="lambda_handler.handler",
+ )
+
+ test_string = "test_string' with some quotes"
+ body = '{"test_var": "%s"}' % test_string
+ try:
+ rs = self.lambda_client.invoke(
+ FunctionName=function_name,
+ Payload=body,
+ )
+ assert 200 == rs["ResponseMetadata"]["HTTPStatusCode"]
+
+ payload = rs["Payload"].read()
+ response = json.loads(to_str(payload))
+ assert "response from localstack lambda" in response["body"]
+
+ events = get_lambda_log_events(function_name)
+ assert len(events) > 0
+ assert test_string in str(events[0])
+
+ finally:
+ # clean up
+ testutil.delete_lambda_function(function_name)
+
class TestCustomRuntimes(LambdaTestBase):
@classmethod
Closes #4671.
Replaces the current execution with the array-based command representation, which does not need a shell to run (and therefore has no problem with embedded quotes that would otherwise need escaping). | https://api.github.com/repos/localstack/localstack/pulls/4718 | 2021-10-10T15:32:05Z | 2021-10-10T21:20:54Z | 2021-10-10T21:20:54Z | 2021-10-10T21:20:56Z | 758 | localstack/localstack | 29,105 |
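A minimal sketch of why the argv-list form fixes the quoting bug, using Python's `subprocess` for brevity (the PR itself builds a `node` argv list inside the lambda executor; the names below are illustrative only). With a list, no shell is involved, so each element reaches the child process verbatim and an embedded single quote needs no escaping.

```python
import json
import subprocess
import sys

# A payload with a single quote -- the input that broke the shell-string form.
event = {"test_var": "test_string' with some quotes"}

# Argv-list form: the JSON string is passed as one argument, untouched by any
# shell, so the embedded quote survives the round trip to the child process.
cmd = [
    sys.executable,
    "-c",
    "import sys; sys.stdout.write(sys.argv[1])",
    json.dumps(event),
]
out = subprocess.run(cmd, capture_output=True, text=True).stdout
assert json.loads(out) == event  # quote preserved, no escaping needed
```

The equivalent single-string command would require `shell=True` plus careful escaping of the embedded quote, which is exactly the failure mode this PR removes.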
drop coveralls support | diff --git a/.travis.yml b/.travis.yml
index 7b24e051eca..e857abbd8ea 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -11,11 +11,10 @@ env:
- TOXENV=py33
- TOXENV=docs
install:
- - pip install -U tox twine wheel codecov coveralls
+ - pip install -U tox twine wheel codecov
script: tox
after_success:
- codecov
- - coveralls
notifications:
irc:
use_notice: true
I don't know why, but Coveralls uploads on Travis recently started to take 30s. And we're not using them anyway. What about removing Coveralls support?
| https://api.github.com/repos/scrapy/scrapy/pulls/1537 | 2015-10-12T13:14:01Z | 2015-10-12T23:36:12Z | 2015-10-12T23:36:12Z | 2015-10-12T23:36:14Z | 136 | scrapy/scrapy | 34,468 |
[extension/openai] add edits & image endpoints & fix prompt return in non --chat modes | diff --git a/characters/instruction-following/ChatGLM.yaml b/characters/instruction-following/ChatGLM.yaml
index 0e5d3f4135..f25f490899 100644
--- a/characters/instruction-following/ChatGLM.yaml
+++ b/characters/instruction-following/ChatGLM.yaml
@@ -1,4 +1,4 @@
-user: "[Round <|round|>]\n问:"
-bot: "答:"
+user: "[Round <|round|>]\n问:"
+bot: "答:"
turn_template: "<|user|><|user-message|>\n<|bot|><|bot-message|>\n"
context: ""
diff --git a/extensions/openai/README.md b/extensions/openai/README.md
index b4d4ff3a75..b20eba3326 100644
--- a/extensions/openai/README.md
+++ b/extensions/openai/README.md
@@ -11,6 +11,15 @@ Optional (for flask_cloudflared, embeddings):
pip3 install -r requirements.txt
```
+It listens on tcp port 5001 by default. You can use the OPENEDAI_PORT environment variable to change this.
+
+To enable the bare bones image generation (txt2img) set: SD_WEBUI_URL to point to your Stable Diffusion API ([Automatic1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)).
+
+Example:
+```
+SD_WEBUI_URL=http://127.0.0.1:7861
+```
+
### Embeddings (alpha)
Embeddings requires ```sentence-transformers``` installed, but chat and completions will function without it loaded. The embeddings endpoint is currently using the HuggingFace model: ```sentence-transformers/all-mpnet-base-v2``` for embeddings. This produces 768 dimensional embeddings (the same as the text-davinci-002 embeddings), which is different from OpenAI's current default ```text-embedding-ada-002``` model which produces 1536 dimensional embeddings. The model is small-ish and fast-ish. This model and embedding size may change in the future.
@@ -67,17 +76,22 @@ const api = new ChatGPTAPI({
## Compatibility & not so compatibility
-What's working:
-
| API endpoint | tested with | notes |
| --- | --- | --- |
| /v1/models | openai.Model.list() | returns the currently loaded model_name and some mock compatibility options |
| /v1/models/{id} | openai.Model.get() | returns whatever you ask for, model does nothing yet anyways |
| /v1/text_completion | openai.Completion.create() | the most tested, only supports single string input so far |
| /v1/chat/completions | openai.ChatCompletion.create() | depending on the model, this may add leading linefeeds |
+| /v1/edits | openai.Edit.create() | Assumes an instruction following model, but may work with others |
+| /v1/images/generations | openai.Image.create() | Bare bones, no model configuration, response_format='b64_json' only. |
| /v1/embeddings | openai.Embedding.create() | Using Sentence Transformer, dimensions are different and may never be directly comparable to openai embeddings. |
| /v1/moderations | openai.Moderation.create() | does nothing. successfully. |
| /v1/engines/\*/... completions, embeddings, generate | python-openai v0.25 and earlier | Legacy engines endpoints |
+| /v1/images/edits | openai.Image.create_edit() | not supported |
+| /v1/images/variations | openai.Image.create_variation() | not supported |
+| /v1/audio/\* | openai.Audio.\* | not supported |
+| /v1/files\* | openai.Files.\* | not supported |
+| /v1/fine-tunes\* | openai.FineTune.\* | not supported |
The model name setting is ignored in completions, but you may need to adjust the maximum token length to fit the model (ie. set to <2048 tokens instead of 4096, 8k, etc). To mitigate some of this, the max_tokens value is halved until it is less than truncation_length for the model (typically 2k).
@@ -99,6 +113,10 @@ Some hacky mappings:
defaults are mostly from openai, so are different. I use the openai defaults where I can and try to scale them to the webui defaults with the same intent.
+### Models
+
+This has been successfully tested with Koala, Alpaca, gpt4-x-alpaca, GPT4all-snoozy, wizard-vicuna, stable-vicuna and Vicuna 1.1 - ie. Instruction Following models. If you test with other models please let me know how it goes. Less than satisfying results (so far): RWKV-4-Raven, llama, mpt-7b-instruct/chat
+
### Applications
Everything needs OPENAI_API_KEY=dummy set.
@@ -120,4 +138,7 @@ Everything needs OPENAI_API_KEY=dummy set.
* model changing, esp. something for swapping loras or embedding models
* consider switching to FastAPI + starlette for SSE (openai SSE seems non-standard)
* do something about rate limiting or locking requests for completions, most systems will only be able handle a single request at a time before OOM
-* the whole api, images (stable diffusion), audio (whisper), fine-tunes (training), edits, files, etc.
\ No newline at end of file
+
+## Bugs? Feedback? Comments? Pull requests?
+
+Are all appreciated, please @matatonic and I'll try to get back to you as soon as possible.
diff --git a/extensions/openai/cache_embedding_model.py b/extensions/openai/cache_embedding_model.py
new file mode 100755
index 0000000000..44ac1dcd66
--- /dev/null
+++ b/extensions/openai/cache_embedding_model.py
@@ -0,0 +1,8 @@
+#!/usr/bin/env python3
+# preload the embedding model, useful for Docker images to prevent re-download on config change
+# Dockerfile:
+# ENV OPENEDAI_EMBEDDING_MODEL=all-mpnet-base-v2 # Optional
+# RUN python3 cache_embedded_model.py
+import os, sentence_transformers
+st_model = os.environ["OPENEDAI_EMBEDDING_MODEL"] if "OPENEDAI_EMBEDDING_MODEL" in os.environ else "all-mpnet-base-v2"
+model = sentence_transformers.SentenceTransformer(st_model)
diff --git a/extensions/openai/script.py b/extensions/openai/script.py
index c46dbe04b7..711b76a2c7 100644
--- a/extensions/openai/script.py
+++ b/extensions/openai/script.py
@@ -2,6 +2,8 @@
import json
import os
import time
+import requests
+import yaml
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from threading import Thread
@@ -48,6 +50,31 @@ def clamp(value, minvalue, maxvalue):
return max(minvalue, min(value, maxvalue))
+def deduce_template():
+ # Alpaca is verbose so a good default prompt
+ default_template = (
+ "Below is an instruction that describes a task, paired with an input that provides further context. "
+ "Write a response that appropriately completes the request.\n\n"
+ "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
+ )
+
+ # Use the special instruction/input/response template for anything trained like Alpaca
+ if shared.settings['instruction_template'] in ['Alpaca', 'Alpaca-Input']:
+ return default_template
+
+ try:
+ instruct = yaml.safe_load(open(f"characters/instruction-following/{shared.settings['instruction_template']}.yaml", 'r'))
+
+ template = instruct['turn_template']
+ template = template\
+ .replace('<|user|>', instruct.get('user', ''))\
+ .replace('<|bot|>', instruct.get('bot', ''))\
+ .replace('<|user-message|>', '{instruction}\n{input}')
+ return instruct.get('context', '') + template[:template.find('<|bot-message|>')]
+ except:
+ return default_template
+
+
def float_list_to_base64(float_list):
# Convert the list to a float32 array that the OpenAPI client expects
float_array = np.array(float_list, dtype="float32")
@@ -120,11 +147,20 @@ def do_GET(self):
self.send_error(404)
def do_POST(self):
- content_length = int(self.headers['Content-Length'])
- body = json.loads(self.rfile.read(content_length).decode('utf-8'))
+ # ... haaack.
+ is_chat = shared.args.chat
+ try:
+ shared.args.chat = True
+ self.do_POST_wrap()
+ finally:
+ shared.args.chat = is_chat
+ def do_POST_wrap(self):
if debug:
print(self.headers) # did you know... python-openai sends your linux kernel & python version?
+ content_length = int(self.headers['Content-Length'])
+ body = json.loads(self.rfile.read(content_length).decode('utf-8'))
+
if debug:
print(body)
@@ -150,7 +186,7 @@ def do_POST(self):
truncation_length = default(shared.settings, 'truncation_length', 2048)
truncation_length = clamp(default(body, 'truncation_length', truncation_length), 1, truncation_length)
- default_max_tokens = truncation_length if is_chat else 16 # completions default, chat default is 'inf' so we need to cap it., the default for chat is "inf"
+ default_max_tokens = truncation_length if is_chat else 16 # completions default, chat default is 'inf' so we need to cap it.
max_tokens_str = 'length' if is_legacy else 'max_tokens'
max_tokens = default(body, max_tokens_str, default(shared.settings, 'max_new_tokens', default_max_tokens))
@@ -440,6 +476,129 @@ def do_POST(self):
else:
resp[resp_list][0]["text"] = answer
+ response = json.dumps(resp)
+ self.wfile.write(response.encode('utf-8'))
+ elif '/edits' in self.path:
+ self.send_response(200)
+ self.send_header('Content-Type', 'application/json')
+ self.end_headers()
+
+ created_time = int(time.time())
+
+ # Using Alpaca format, this may work with other models too.
+ instruction = body['instruction']
+ input = body.get('input', '')
+
+ instruction_template = deduce_template()
+ edit_task = instruction_template.format(instruction=instruction, input=input)
+
+ truncation_length = default(shared.settings, 'truncation_length', 2048)
+ token_count = len(encode(edit_task)[0])
+ max_tokens = truncation_length - token_count
+
+ req_params = {
+ 'max_new_tokens': max_tokens,
+ 'temperature': clamp(default(body, 'temperature', 1.0), 0.001, 1.999),
+ 'top_p': clamp(default(body, 'top_p', 1.0), 0.001, 1.0),
+ 'top_k': 1,
+ 'repetition_penalty': 1.18,
+ 'encoder_repetition_penalty': 1.0,
+ 'suffix': None,
+ 'stream': False,
+ 'echo': False,
+ 'seed': shared.settings.get('seed', -1),
+ # 'n' : default(body, 'n', 1), # 'n' doesn't have a direct map
+ 'truncation_length': truncation_length,
+ 'add_bos_token': shared.settings.get('add_bos_token', True),
+ 'do_sample': True,
+ 'typical_p': 1.0,
+ 'min_length': 0,
+ 'no_repeat_ngram_size': 0,
+ 'num_beams': 1,
+ 'penalty_alpha': 0.0,
+ 'length_penalty': 1,
+ 'early_stopping': False,
+ 'ban_eos_token': False,
+ 'skip_special_tokens': True,
+ 'custom_stopping_strings': [],
+ }
+
+ if debug:
+ print({'edit_template': edit_task, 'req_params': req_params, 'token_count': token_count})
+
+ generator = generate_reply(edit_task, req_params, stopping_strings=standard_stopping_strings)
+
+ answer = ''
+ for a in generator:
+ if isinstance(a, str):
+ answer = a
+ else:
+ answer = a[0]
+
+ completion_token_count = len(encode(answer)[0])
+
+ resp = {
+ "object": "edit",
+ "created": created_time,
+ "choices": [{
+ "text": answer,
+ "index": 0,
+ }],
+ "usage": {
+ "prompt_tokens": token_count,
+ "completion_tokens": completion_token_count,
+ "total_tokens": token_count + completion_token_count
+ }
+ }
+
+ if debug:
+ print({'answer': answer, 'completion_token_count': completion_token_count})
+
+ response = json.dumps(resp)
+ self.wfile.write(response.encode('utf-8'))
+ elif '/images/generations' in self.path and 'SD_WEBUI_URL' in os.environ:
+ # Stable Diffusion callout wrapper for txt2img
+ # Low effort implementation for compatibility. With only "prompt" being passed and assuming DALL-E
+ # the results will be limited and likely poor. SD has hundreds of models and dozens of settings.
+ # If you want high quality tailored results you should just use the Stable Diffusion API directly.
+ # it's too general an API to try and shape the result with specific tags like "masterpiece", etc,
+ # Will probably work best with the stock SD models.
+ # SD configuration is beyond the scope of this API.
+ # At this point I will not add the edits and variations endpoints (ie. img2img) because they
+ # require changing the form data handling to accept multipart form data, also to properly support
+ # url return types will require file management and a web serving files... Perhaps later!
+
+ self.send_response(200)
+ self.send_header('Content-Type', 'application/json')
+ self.end_headers()
+
+ width, height = [ int(x) for x in default(body, 'size', '1024x1024').split('x') ] # ignore the restrictions on size
+ response_format = default(body, 'response_format', 'url') # or b64_json
+
+ payload = {
+ 'prompt': body['prompt'], # ignore prompt limit of 1000 characters
+ 'width': width,
+ 'height': height,
+ 'batch_size': default(body, 'n', 1) # ignore the batch limits of max 10
+ }
+
+ resp = {
+ 'created': int(time.time()),
+ 'data': []
+ }
+
+ # TODO: support SD_WEBUI_AUTH username:password pair.
+ sd_url = f"{os.environ['SD_WEBUI_URL']}/sdapi/v1/txt2img"
+
+ response = requests.post(url=sd_url, json=payload)
+ r = response.json()
+ # r['parameters']...
+ for b64_json in r['images']:
+ if response_format == 'b64_json':
+ resp['data'].extend([{'b64_json': b64_json}])
+ else:
+ resp['data'].extend([{'url': f'data:image/png;base64,{b64_json}'}]) # yeah it's lazy. requests.get() will not work with this
+
response = json.dumps(resp)
self.wfile.write(response.encode('utf-8'))
elif '/embeddings' in self.path and embedding_model is not None:
@@ -540,11 +699,12 @@ def run_server():
try:
from flask_cloudflared import _run_cloudflared
public_url = _run_cloudflared(params['port'], params['port'] + 1)
- print(f'Starting OpenAI compatible api at {public_url}/')
+ print(f'Starting OpenAI compatible api at\nOPENAI_API_BASE={public_url}/v1')
except ImportError:
print('You should install flask_cloudflared manually')
else:
- print(f'Starting OpenAI compatible api at http://{server_addr[0]}:{server_addr[1]}/')
+ print(f'Starting OpenAI compatible api:\nOPENAI_API_BASE=http://{server_addr[0]}:{server_addr[1]}/v1')
+
server.serve_forever()
diff --git a/models/config.yaml b/models/config.yaml
index f5c9d508b9..2bef3ce50f 100644
--- a/models/config.yaml
+++ b/models/config.yaml
@@ -54,6 +54,9 @@
.*vicuna.*(1.1|1_1):
mode: 'instruct'
instruction_template: 'Vicuna-v1.1'
+.*wizard.*vicuna:
+ mode: 'instruct'
+ instruction_template: 'Vicuna-v1.1'
.*stable.*vicuna:
mode: 'instruct'
instruction_template: 'StableVicuna'
@@ -135,4 +138,4 @@
instruction_template: 'INCITE-Chat'
.*incite.*instruct:
mode: 'instruct'
- instruction_template: 'INCITE-Instruct'
\ No newline at end of file
+ instruction_template: 'INCITE-Instruct'
| This includes correct instruction templates for each model (not just Alpaca). For now they're in the code, but I will change them to use the on-disk format once they're fully compatible.
I found some small differences with different instruction formats, but I don't want to include those changes here. I will update those in another PR. | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/1935 | 2023-05-09T05:36:33Z | 2023-05-11T14:06:40Z | 2023-05-11T14:06:40Z | 2023-05-11T14:07:29Z | 3,994 | oobabooga/text-generation-webui | 26,368 |
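The handler in the diff above routes every sampling parameter through two small helpers, `default` and `clamp` (e.g. `clamp(default(body, 'temperature', 1.0), 0.001, 1.999)`). Their definitions live elsewhere in the extension, so the signatures below are an assumption — a minimal sketch of the pattern:

```python
def default(dic, key, fallback):
    # Hypothetical re-creation: take the request value when present,
    # otherwise fall back to the server-side setting.
    val = dic.get(key, fallback)
    return fallback if val is None else val

def clamp(value, minvalue, maxvalue):
    # Constrain a parameter to the range the backend accepts.
    return max(minvalue, min(value, maxvalue))

body = {"temperature": 5.0}  # an out-of-range request
temperature = clamp(default(body, "temperature", 1.0), 0.001, 1.999)
top_p = clamp(default(body, "top_p", 1.0), 0.001, 1.0)
print(temperature, top_p)  # 1.999 1.0
```

This is also why the `default_max_tokens` line matters: a missing request field silently picks up the server default instead of erroring.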
Langchain: Fix quickstart doc code not working | diff --git a/docs/docs/get_started/quickstart.mdx b/docs/docs/get_started/quickstart.mdx
index 509b1fe6b440b3..85247950613b1d 100644
--- a/docs/docs/get_started/quickstart.mdx
+++ b/docs/docs/get_started/quickstart.mdx
@@ -223,6 +223,7 @@ First we need to install the required packages for that:
```shell
pip install docarray
+pip install tiktoken
```
Then we can build our index:
@@ -313,7 +314,7 @@ from langchain_core.prompts import MessagesPlaceholder
prompt = ChatPromptTemplate.from_messages([
MessagesPlaceholder(variable_name="chat_history"),
- ("user", "{input}")
+ ("user", "{input}"),
("user", "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation")
])
retriever_chain = create_history_aware_retriever(llm, retriever, prompt)
@@ -403,6 +404,13 @@ tools = [retriever_tool, search]
Now that we have the tools, we can create an agent to use them. We will go over this pretty quickly - for a deeper dive into what exactly is going on, check out the [Agent's Getting Started documentation](/docs/modules/agents)
+Install langchain hub first
+```bash
+pip install langchainhub
+```
+
+Now we can use it to get a predefined prompt
+
```python
from langchain.chat_models import ChatOpenAI
from langchain import hub
| The quickstart doc is missing a few very simple things without which the code does not work. This PR fixes that by
- Adding commands to install `tiktoken` and `langchainhub`
- Adding a comma between two parameters for one of the methods
| https://api.github.com/repos/langchain-ai/langchain/pulls/15352 | 2023-12-31T02:58:53Z | 2024-01-01T21:38:33Z | 2024-01-01T21:38:33Z | 2024-01-02T19:41:26Z | 345 | langchain-ai/langchain | 43,612 |
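The comma fix above is easy to miss but real: without it, Python parses the two adjacent tuples as a call expression, so the prompt list blows up at runtime. A stand-alone sketch of the failure mode (plain tuples stand in for the langchain message objects):

```python
# With the comma restored, this is an ordinary three-element list.
messages = [
    ("placeholder", "{chat_history}"),  # stand-in for MessagesPlaceholder
    ("user", "{input}"),                # <- the comma the PR adds
    ("user", "Given the above conversation, generate a search query"),
]
assert len(messages) == 3

# Without the comma, the first tuple is "called" with the second as arguments.
try:
    broken = [
        ("user", "{input}")
        ("user", "Given the above conversation, generate a search query")
    ]
except TypeError as err:
    print(err)  # 'tuple' object is not callable
```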
fix: kg rag should work on all graph stores | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 619d5d084ff65..b87a5b5b1ccda 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -8,6 +8,7 @@
### Bug Fixes / Nits
- Only convert newlines to spaces for text 001 embedding models in OpenAI (#7484)
+- Fix `KnowledgeGraphRagRetriever` for non-nebula indexes (#7488)
## [0.8.14] - 2023-08-30
diff --git a/llama_index/indices/knowledge_graph/retrievers.py b/llama_index/indices/knowledge_graph/retrievers.py
index 9094d30924625..241e2da598c40 100644
--- a/llama_index/indices/knowledge_graph/retrievers.py
+++ b/llama_index/indices/knowledge_graph/retrievers.py
@@ -626,7 +626,7 @@ def _get_knowledge_sequence(
knowledge_sequence = []
if rel_map:
knowledge_sequence.extend(
- [rel_obj for rel_objs in rel_map.values() for rel_obj in rel_objs]
+ [str(rel_obj) for rel_objs in rel_map.values() for rel_obj in rel_objs]
)
else:
logger.info("> No knowledge sequence extracted from entities.")
@@ -649,7 +649,7 @@ async def _aget_knowledge_sequence(
knowledge_sequence = []
if rel_map:
knowledge_sequence.extend(
- [rel_obj for rel_objs in rel_map.values() for rel_obj in rel_objs]
+ [str(rel_obj) for rel_objs in rel_map.values() for rel_obj in rel_objs]
)
else:
logger.info("> No knowledge sequence extracted from entities.")
| # Description
This change fixes the KG RAG retriever on all graph stores. A similar approach had already been taken in the KG table retrievers, but I forgot to handle this divergent behavior when implementing this one.
Fixes https://github.com/jerryjliu/llama_index/issues/7483
## Type of Change
Please delete options that are not relevant.
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
- [ ] Added new unit/integration tests
- [x] [notebook](https://colab.research.google.com/drive/1E9tqUX6CJecNwWFxCYZgSPMxuyUHS725?usp=sharing) (that tests end-to-end)
- [ ] I stared at the code and made sure it makes sense
# Suggested Checklist:
- [x] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
| https://api.github.com/repos/run-llama/llama_index/pulls/7488 | 2023-08-31T02:28:02Z | 2023-08-31T04:52:33Z | 2023-08-31T04:52:33Z | 2023-08-31T04:52:33Z | 395 | run-llama/llama_index | 6,400 |
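The one-line `str(rel_obj)` fix above matters because non-NebulaGraph stores can return relationship objects rather than plain strings, and the knowledge sequence is later treated as text. A self-contained sketch of the failure mode (the `Rel` class is hypothetical, not llama_index's actual type):

```python
class Rel:
    # Hypothetical stand-in for a graph store's relationship object.
    def __init__(self, subj, pred, obj):
        self.subj, self.pred, self.obj = subj, pred, obj

    def __str__(self):
        return f"{self.subj} -[{self.pred}]-> {self.obj}"

rel_map = {"llama": [Rel("llama", "is_a", "camelid")]}

# Before the fix: raw objects leak into the knowledge sequence,
# and joining them as text later fails.
raw = [rel_obj for rel_objs in rel_map.values() for rel_obj in rel_objs]

# After the fix: everything is stringified up front, so any store works.
fixed = [str(rel_obj) for rel_objs in rel_map.values() for rel_obj in rel_objs]
print(fixed)  # ['llama -[is_a]-> camelid']
```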
Pretty attrs | diff --git a/CHANGELOG.md b/CHANGELOG.md
index f1d30bbf9..76ea275ae 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -12,6 +12,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Added syntax for call, i.e. "Foo(bar)"
- Fixed initial blank lines removed from Syntax https://github.com/willmcgugan/rich/issues/1214
- Added Console.measure as a convenient alias for Measurement.get
+- Added support for pretty printing attrs objects
## [10.1.0] - 2020-04-03
diff --git a/examples/attrs.py b/examples/attrs.py
new file mode 100644
index 000000000..4928405f7
--- /dev/null
+++ b/examples/attrs.py
@@ -0,0 +1,58 @@
+from typing import List
+
+try:
+ import attr
+except ImportError:
+ print("This example requires attrs library")
+ print("pip install attrs")
+ raise
+
+
+@attr.define
+class Point3D:
+ x: float
+ y: float
+ z: float = 0
+
+
+@attr.define
+class Triangle:
+ point1: Point3D
+ point2: Point3D
+ point3: Point3D
+
+
+@attr.define
+class Model:
+ name: str
+ triangles: List[Triangle] = attr.Factory(list)
+
+
+if __name__ == "__main__":
+ model = Model(
+ name="Alien#1",
+ triangles=[
+ Triangle(
+ Point3D(x=20, y=50),
+ Point3D(x=50, y=15, z=-45.34),
+ Point3D(3.1426, 83.2323, -16),
+ )
+ ],
+ )
+
+ from rich.console import Console
+
+ console = Console()
+
+ console.print(
+ "\nRich can pretty print [b]attrs[/b] objects ( https://www.attrs.org/en/stable/ )\n",
+ justify="center",
+ )
+
+ console.rule("attrs without Rich")
+
+ print(model)
+
+ console.rule("attrs with Rich")
+
+ console.print(model)
diff --git a/poetry.lock b/poetry.lock
index 9a94b5f6e..00992afec 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -49,17 +49,17 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "attrs"
-version = "21.1.0"
+version = "20.3.0"
description = "Classes Without Boilerplate"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
-dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface", "furo", "sphinx", "sphinx-notfound-page", "pre-commit"]
-docs = ["furo", "sphinx", "zope.interface", "sphinx-notfound-page"]
-tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins", "zope.interface"]
-tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "mypy", "pytest-mypy-plugins"]
+dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "furo", "sphinx", "pre-commit"]
+docs = ["furo", "sphinx", "zope.interface"]
+tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface"]
+tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six"]
[[package]]
name = "backcall"
@@ -891,7 +891,7 @@ jupyter = ["ipywidgets"]
[metadata]
lock-version = "1.1"
python-versions = "^3.6"
-content-hash = "9e43bf9c815b3ce6471d7cee0281c57b457375773358f2de76cdb1a5c8644730"
+content-hash = "3cacbff8606717bf2ba96796c8e29a386fa24c587199234626627027e2eb576d"
[metadata.files]
appdirs = [
@@ -931,8 +931,8 @@ atomicwrites = [
{file = "atomicwrites-1.4.0.tar.gz", hash = "sha256:ae70396ad1a434f9c7046fd2dd196fc04b12f9e91ffb859164193be8b6168a7a"},
]
attrs = [
- {file = "attrs-21.1.0-py2.py3-none-any.whl", hash = "sha256:8ee1e5f5a1afc5b19bdfae4fdf0c35ed324074bdce3500c939842c8f818645d9"},
- {file = "attrs-21.1.0.tar.gz", hash = "sha256:3901be1cb7c2a780f14668691474d9252c070a756be0a9ead98cfeabfa11aeb8"},
+ {file = "attrs-20.3.0-py2.py3-none-any.whl", hash = "sha256:31b2eced602aa8423c2aea9c76a724617ed67cf9513173fd3a4f03e3a929c7e6"},
+ {file = "attrs-20.3.0.tar.gz", hash = "sha256:832aa3cde19744e49938b91fea06d69ecb9e649c93ba974535d08ad92164f700"},
]
backcall = [
{file = "backcall-0.2.0-py2.py3-none-any.whl", hash = "sha256:fbbce6a29f263178a1f7915c1940bde0ec2b2a967566fe1c65c1dfb7422bd255"},
diff --git a/pyproject.toml b/pyproject.toml
index 09f7abd47..9e5951183 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -41,6 +41,7 @@ pytest = "^6.2.3"
black = "^20.8b1"
mypy = "^0.812"
pytest-cov = "^2.11.1"
+attrs = "^20.1.0"
[build-system]
requires = ["poetry-core>=1.0.0"]
diff --git a/rich/pretty.py b/rich/pretty.py
index 1e2f532d4..49a782560 100644
--- a/rich/pretty.py
+++ b/rich/pretty.py
@@ -18,8 +18,23 @@
Tuple,
)
-from rich.highlighter import ReprHighlighter
+try:
+ import attr as _attr_module
+except ImportError: # pragma: no cover
+ _attr_module = None # type: ignore
+
+def _is_attr_object(obj: Any) -> bool:
+ """Check if an object was created with attrs module."""
+ return _attr_module is not None and _attr_module.has(type(obj))
+
+
+def _get_attr_fields(obj: Any) -> Iterable["_attr_module.Attribute"]:
+ """Get fields for an attrs object."""
+ return _attr_module.fields(type(obj)) if _attr_module is not None else []
+
+
+from .highlighter import ReprHighlighter
from . import get_console
from ._loop import loop_last
from ._pick import pick_bool
@@ -264,6 +279,7 @@ def is_expandable(obj: Any) -> bool:
isinstance(obj, _CONTAINERS)
or (is_dataclass(obj) and not isinstance(obj, type))
or hasattr(obj, "__rich_repr__")
+ or _is_attr_object(obj)
)
@@ -489,7 +505,6 @@ def iter_rich_args(rich_args) -> Iterable[Union[Any, Tuple[str, Any]]]:
child_node = _traverse(child)
child_node.last = last
child_node.key_repr = key
- child_node.last = last
child_node.key_separator = "="
append(child_node)
else:
@@ -500,6 +515,50 @@ def iter_rich_args(rich_args) -> Iterable[Union[Any, Tuple[str, Any]]]:
node = Node(
value_repr=f"{obj.__class__.__name__}()", children=[], last=root
)
+ elif _is_attr_object(obj):
+ children = []
+ append = children.append
+
+ attr_fields = _get_attr_fields(obj)
+ if attr_fields:
+ node = Node(
+ open_brace=f"{obj.__class__.__name__}(",
+ close_brace=")",
+ children=children,
+ last=root,
+ )
+
+ def iter_attrs() -> Iterable[
+ Tuple[str, Any, Optional[Callable[[Any], str]]]
+ ]:
+ """Iterate over attr fields and values."""
+ for attr in attr_fields:
+ if attr.repr:
+ try:
+ value = getattr(obj, attr.name)
+ except Exception as error:
+ # Can happen, albeit rarely
+ yield (attr.name, error, None)
+ else:
+ yield (
+ attr.name,
+ value,
+ attr.repr if callable(attr.repr) else None,
+ )
+
+ for last, (name, value, repr_callable) in loop_last(iter_attrs()):
+ if repr_callable:
+ child_node = Node(value_repr=str(repr_callable(value)))
+ else:
+ child_node = _traverse(value)
+ child_node.last = last
+ child_node.key_repr = name
+ child_node.key_separator = "="
+ append(child_node)
+ else:
+ node = Node(
+ value_repr=f"{obj.__class__.__name__}()", children=[], last=root
+ )
elif (
is_dataclass(obj)
@@ -629,7 +688,7 @@ def pprint(
max_length: Optional[int] = None,
max_string: Optional[int] = None,
expand_all: bool = False,
-):
+) -> None:
"""A convenience function for pretty printing.
Args:
diff --git a/tests/test_pretty.py b/tests/test_pretty.py
index d8c046c27..cb17e064c 100644
--- a/tests/test_pretty.py
+++ b/tests/test_pretty.py
@@ -5,10 +5,19 @@
import sys
from typing import List
+import attr
+import pytest
+
from rich.console import Console
from rich.pretty import install, Pretty, pprint, pretty_repr, Node
+skip_py36 = pytest.mark.skipif(
+ sys.version_info.minor == 6 and sys.version_info.major == 3,
+ reason="rendered differently on py3.6",
+)
+
+
def test_install():
console = Console(file=io.StringIO())
dh = sys.displayhook
@@ -181,3 +190,42 @@ def __repr__(self):
return ""
assert pretty_repr(Foo()) == ""
+
+
+def test_attrs():
+ @attr.define
+ class Point:
+ x: int
+ y: int
+ foo: str = attr.field(repr=str.upper)
+ z: int = 0
+
+ result = pretty_repr(Point(1, 2, foo="bar"))
+ print(repr(result))
+ expected = "Point(x=1, y=2, foo=BAR, z=0)"
+ assert result == expected
+
+
+def test_attrs_empty():
+ @attr.define
+ class Nada:
+ pass
+
+ result = pretty_repr(Nada())
+ print(repr(result))
+ expected = "Nada()"
+ assert result == expected
+
+
+@skip_py36
+def test_attrs_broken():
+ @attr.define
+ class Foo:
+ bar: int
+
+ foo = Foo(1)
+ del foo.bar
+ result = pretty_repr(foo)
+ print(repr(result))
+ expected = "Foo(bar=AttributeError('bar'))"
+ assert result == expected
| Adds pretty-printing support for the attrs module.
| https://api.github.com/repos/Textualize/rich/pulls/1217 | 2021-05-07T10:45:43Z | 2021-05-08T11:16:02Z | 2021-05-08T11:16:02Z | 2021-05-09T13:28:48Z | 3,008 | Textualize/rich | 48,024 |
Fixed Golang net/http param pollution outcome | diff --git a/HTTP Parameter Pollution/README.md b/HTTP Parameter Pollution/README.md
index ad739b6e88..16eb526885 100644
--- a/HTTP Parameter Pollution/README.md
+++ b/HTTP Parameter Pollution/README.md
@@ -36,12 +36,12 @@ When ?par1=a&par1=b
| Python Django |Last occurrence |b |
| Nodejs |All occurrences |a,b |
| Golang net/http - `r.URL.Query().Get("param")` |First occurrence |a |
-| Golang net/http - `r.URL.Query()["param"]` |All occurrences |a,b |
+| Golang net/http - `r.URL.Query()["param"]` |All occurrences in array |['a','b'] |
| IBM Lotus Domino |First occurrence |a |
| IBM HTTP Server |First occurrence |a |
| Perl CGI/Apache |First occurrence |a |
| mod_wsgi (Python)/Apache |First occurrence |a |
-| Python/Zope |All occurrences in array |['a','b'] |
+| Python/Zope |All occurrences in array |['a','b'] |
| Ruby on Rails |Last occurrence |b |
## References
| Fixed the Golang net/http param pollution outcome, since it returns an array (like the Python/Zope outcome).
Tested with the following:
`test.go`
```golang
package main
import (
"log"
"net/http"
)
func main() {
http.ListenAndServe(":1234", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log.Println("Indexing:", r.URL.Query()["param"], "Get():", r.URL.Query().Get("param"))
}))
}
```
```sh
$ go run ./test.go & sleep 1 && curl 'localhost:1234?param=one¶m=two'
2023/04/14 14:51:20 Indexing: [one two] Get(): one
``` | https://api.github.com/repos/swisskyrepo/PayloadsAllTheThings/pulls/635 | 2023-04-14T12:58:32Z | 2023-04-14T15:48:01Z | 2023-04-14T15:48:01Z | 2023-04-14T15:48:01Z | 293 | swisskyrepo/PayloadsAllTheThings | 8,362 |
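The same first/last/all split from the table can be reproduced with Python's standard library, where the parsed result is likewise an array per key:

```python
from urllib.parse import parse_qs

params = parse_qs("par1=a&par1=b")
print(params["par1"])      # ['a', 'b'] — all occurrences in an array, like Golang indexing / Zope
print(params["par1"][0])   # 'a' — first occurrence, like Get()
print(params["par1"][-1])  # 'b' — last occurrence, like Django / Rails
```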
[README] Config Github codespace and Dev container | diff --git a/.devcontainer/README.md b/.devcontainer/README.md
new file mode 100644
index 000000000..dd088aab1
--- /dev/null
+++ b/.devcontainer/README.md
@@ -0,0 +1,39 @@
+# Dev container
+
+This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.
+
+You can use the dev container configuration in this folder to build and start running MetaGPT locally! For more, refer to the main README under the home directory.
+You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
+
+## GitHub Codespaces
+<a href="https://codespaces.new/geekan/MetaGPT"><img src="https://github.com/codespaces/badge.svg" alt="Open in GitHub Codespaces"></a>
+
+You may use the button above to open this repo in a Codespace
+
+For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
+
+## VS Code Dev Containers
+<a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/geekan/MetaGPT"><img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers"></a>
+
+Note: If you click this link you will open the main repo and not your local cloned repo, you can use this link and replace with your username and cloned repo name:
+https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/geekan/MetaGPT
+
+
+If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
+
+You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
+
+1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).
+
+2. Open a locally cloned copy of the code:
+
+ - Fork and Clone this repository to your local filesystem.
+ - Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
+ - Select the cloned copy of this folder, wait for the container to start, and try things out!
+
+You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).
+
+## Tips and tricks
+
+* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
+* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.
diff --git a/.devcontainer/devcontainer.json b/.devcontainer/devcontainer.json
new file mode 100644
index 000000000..a774d0ed1
--- /dev/null
+++ b/.devcontainer/devcontainer.json
@@ -0,0 +1,27 @@
+// For format details, see https://aka.ms/devcontainer.json. For config options, see the
+// README at: https://github.com/devcontainers/templates/tree/main/src/python
+{
+ "name": "Python 3",
+ // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
+ "image": "mcr.microsoft.com/devcontainers/python:0-3.11",
+
+ // Features to add to the dev container. More info: https://containers.dev/features.
+ // "features": {},
+
+ // Configure tool-specific properties.
+ "customizations": {
+ // Configure properties specific to VS Code.
+ "vscode": {
+ "settings": {},
+ "extensions": [
+ "streetsidesoftware.code-spell-checker"
+ ]
+ }
+ },
+
+ // Use 'postCreateCommand' to run commands after the container is created.
+ "postCreateCommand": "./.devcontainer/postCreateCommand.sh"
+
+ // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
+ // "remoteUser": "root"
+}
diff --git a/.devcontainer/docker-compose.yaml b/.devcontainer/docker-compose.yaml
new file mode 100644
index 000000000..a9988b1f3
--- /dev/null
+++ b/.devcontainer/docker-compose.yaml
@@ -0,0 +1,31 @@
+version: '3'
+services:
+ metagpt:
+ build:
+ dockerfile: Dockerfile
+ context: ..
+ volumes:
+ # Update this to wherever you want VS Code to mount the folder of your project
+ - ..:/workspaces:cached
+ networks:
+ - metagpt-network
+ # environment:
+ # MONGO_ROOT_USERNAME: root
+ # MONGO_ROOT_PASSWORD: example123
+ # depends_on:
+ # - mongo
+ # mongo:
+ # image: mongo
+ # restart: unless-stopped
+ # environment:
+ # MONGO_INITDB_ROOT_USERNAME: root
+ # MONGO_INITDB_ROOT_PASSWORD: example123
+ # ports:
+ # - "27017:27017"
+ # networks:
+ # - metagpt-network
+
+networks:
+ metagpt-network:
+ driver: bridge
+
diff --git a/.devcontainer/postCreateCommand.sh b/.devcontainer/postCreateCommand.sh
new file mode 100644
index 000000000..06d12e408
--- /dev/null
+++ b/.devcontainer/postCreateCommand.sh
@@ -0,0 +1,7 @@
+# Step 1: Ensure that NPM is installed on your system. Then install mermaid-js.
+npm --version
+sudo npm install -g @mermaid-js/mermaid-cli
+
+# Step 2: Ensure that Python 3.9+ is installed on your system. You can check this by using:
+python --version
+python setup.py install
\ No newline at end of file
diff --git a/README.md b/README.md
index 7eaaa2f69..83536bbea 100644
--- a/README.md
+++ b/README.md
@@ -19,6 +19,11 @@
<a href="https://twitter.com/DeepWisdom2019"><img src="https://img.shields.io/twitter/follow/MetaGPT?style=social" alt="Twitter Follow"></a>
</p>
+<p align="center">
+ <a href="https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/geekan/MetaGPT"><img src="https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode" alt="Open in Dev Containers"></a>
+ <a href="https://codespaces.new/geekan/MetaGPT"><img src="https://img.shields.io/badge/Github_Codespace-Open-blue?logo=github" alt="Open in GitHub Codespaces"></a>
+</p>
+
1. MetaGPT takes a **one line requirement** as input and outputs **user stories / competitive analysis / requirements / data structures / APIs / documents, etc.**
2. Internally, MetaGPT includes **product managers / architects / project managers / engineers.** It provides the entire process of a **software company along with carefully orchestrated SOPs.**
1. `Code = SOP(Team)` is the core philosophy. We materialize SOP and apply it to teams composed of LLMs.
| https://api.github.com/repos/geekan/MetaGPT/pulls/186 | 2023-08-10T01:19:14Z | 2023-08-23T07:44:22Z | 2023-08-23T07:44:22Z | 2023-08-23T07:44:22Z | 1,986 | geekan/MetaGPT | 17,024 |
|
STY Improve css block for scroll bars | diff --git a/doc/themes/scikit-learn-modern/static/css/theme.css b/doc/themes/scikit-learn-modern/static/css/theme.css
index ceda27c6de093..b22b700736f56 100644
--- a/doc/themes/scikit-learn-modern/static/css/theme.css
+++ b/doc/themes/scikit-learn-modern/static/css/theme.css
@@ -83,12 +83,12 @@ span.highlighted {
}
div.highlight {
- padding: 0.2rem 0.5rem;
border: 1px solid #ddd;
margin-bottom: 1rem;
}
div.highlight pre {
+ padding: 0.2rem 0.5rem;
margin-bottom: 0;
line-height: 1.2rem;
}
| Makes sure that the scroll bar is flush with the code block when the screen is narrow (for example, on mobile).
## PR
in the [release highlights](https://105249-843222-gh.circle-artifacts.com/0/doc/auto_examples/release_highlights/plot_release_highlights_0_23_0.html#generalized-linear-models-and-poisson-loss-for-gradient-boosting)
![Screen Shot 2020-06-19 at 2 28 54 PM](https://user-images.githubusercontent.com/5402633/85169160-61af9b80-b239-11ea-8b36-b872c89558ee.png)
## Master
in the [release highlights](https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_0_23_0.html#generalized-linear-models-and-poisson-loss-for-gradient-boosting)
![Screen Shot 2020-06-19 at 2 28 43 PM](https://user-images.githubusercontent.com/5402633/85169168-65dbb900-b239-11ea-9588-5c2753f2405a.png)
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/17415 | 2020-06-01T20:05:55Z | 2020-06-19T21:41:53Z | 2020-06-19T21:41:53Z | 2020-06-19T21:41:53Z | 172 | scikit-learn/scikit-learn | 46,310 |
Fix formatting for `if` clauses in `match-case` blocks | diff --git a/CHANGES.md b/CHANGES.md
index e28730a3b5f..fe234380799 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -16,6 +16,9 @@
<!-- Changes that affect Black's preview style -->
+- `if` guards in `case` blocks are now wrapped in parentheses when the line is too long.
+ (#4269)
+
### Configuration
<!-- Changes to how Black can be configured -->
diff --git a/docs/the_black_code_style/future_style.md b/docs/the_black_code_style/future_style.md
index 4ae46cecded..e0f45b47106 100644
--- a/docs/the_black_code_style/future_style.md
+++ b/docs/the_black_code_style/future_style.md
@@ -34,6 +34,8 @@ Currently, the following features are included in the preview style:
quotes of a docstring
- `remove_redundant_guard_parens`: Removes redundant parentheses in `if` guards for
`case` blocks.
+- `parens_for_long_if_clauses_in_case_block`: Adds parentheses to `if` clauses in `case`
+ blocks when the the line is too long
(labels/unstable-features)=
diff --git a/src/black/linegen.py b/src/black/linegen.py
index e34ff040c73..2d9c27a6141 100644
--- a/src/black/linegen.py
+++ b/src/black/linegen.py
@@ -1310,6 +1310,16 @@ def normalize_invisible_parens( # noqa: C901
child, parens_after={"case"}, mode=mode, features=features
)
+ # Add parentheses around if guards in case blocks
+ if (
+ isinstance(child, Node)
+ and child.type == syms.guard
+ and Preview.parens_for_long_if_clauses_in_case_block in mode
+ ):
+ normalize_invisible_parens(
+ child, parens_after={"if"}, mode=mode, features=features
+ )
+
# Add parentheses around long tuple unpacking in assignments.
if (
index == 0
diff --git a/src/black/mode.py b/src/black/mode.py
index 90c10c324a5..b54f355e20a 100644
--- a/src/black/mode.py
+++ b/src/black/mode.py
@@ -180,6 +180,7 @@ class Preview(Enum):
is_simple_lookup_for_doublestar_expression = auto()
docstring_check_for_newline = auto()
remove_redundant_guard_parens = auto()
+ parens_for_long_if_clauses_in_case_block = auto()
UNSTABLE_FEATURES: Set[Preview] = {
diff --git a/src/black/resources/black.schema.json b/src/black/resources/black.schema.json
index 8252a6c4bd8..5c800775d57 100644
--- a/src/black/resources/black.schema.json
+++ b/src/black/resources/black.schema.json
@@ -88,7 +88,8 @@
"typed_params_trailing_comma",
"is_simple_lookup_for_doublestar_expression",
"docstring_check_for_newline",
- "remove_redundant_guard_parens"
+ "remove_redundant_guard_parens",
+ "parens_for_long_if_clauses_in_case_block"
]
},
"description": "Enable specific features included in the `--unstable` style. Requires `--preview`. No compatibility guarantees are provided on the behavior or existence of any unstable features."
diff --git a/tests/data/cases/pattern_matching_complex.py b/tests/data/cases/pattern_matching_complex.py
index ba64f2639a0..028832d772a 100644
--- a/tests/data/cases/pattern_matching_complex.py
+++ b/tests/data/cases/pattern_matching_complex.py
@@ -83,7 +83,7 @@
match x:
case [0]:
y = 0
- case [1, 0] if (x := x[:0]):
+ case [1, 0] if x := x[:0]:
y = 1
case [1, 0]:
y = 2
diff --git a/tests/data/cases/pattern_matching_with_if_stmt.py b/tests/data/cases/pattern_matching_with_if_stmt.py
new file mode 100644
index 00000000000..ff54af91771
--- /dev/null
+++ b/tests/data/cases/pattern_matching_with_if_stmt.py
@@ -0,0 +1,72 @@
+# flags: --preview --minimum-version=3.10
+match match:
+ case "test" if case != "not very loooooooooooooog condition": # comment
+ pass
+
+match smth:
+ case "test" if "any long condition" != "another long condition" and "this is a long condition":
+ pass
+ case test if "any long condition" != "another long condition" and "this is a looooong condition":
+ pass
+ case test if "any long condition" != "another long condition" and "this is a looooong condition": # some additional comments
+ pass
+ case test if (True): # some comment
+ pass
+ case test if (False
+ ): # some comment
+ pass
+ case test if (True # some comment
+ ):
+ pass # some comment
+ case cases if (True # some comment
+ ): # some other comment
+ pass # some comment
+ case match if (True # some comment
+ ):
+ pass # some comment
+
+# case black_test_patma_052 (originally in the pattern_matching_complex test case)
+match x:
+ case [1, 0] if x := x[:0]:
+ y = 1
+ case [1, 0] if (x := x[:0]):
+ y = 1
+
+# output
+
+match match:
+ case "test" if case != "not very loooooooooooooog condition": # comment
+ pass
+
+match smth:
+ case "test" if (
+ "any long condition" != "another long condition" and "this is a long condition"
+ ):
+ pass
+ case test if (
+ "any long condition" != "another long condition"
+ and "this is a looooong condition"
+ ):
+ pass
+ case test if (
+ "any long condition" != "another long condition"
+ and "this is a looooong condition"
+ ): # some additional comments
+ pass
+ case test if True: # some comment
+ pass
+ case test if False: # some comment
+ pass
+ case test if True: # some comment
+ pass # some comment
+ case cases if True: # some comment # some other comment
+ pass # some comment
+ case match if True: # some comment
+ pass # some comment
+
+# case black_test_patma_052 (originally in the pattern_matching_complex test case)
+match x:
+ case [1, 0] if x := x[:0]:
+ y = 1
+ case [1, 0] if x := x[:0]:
+ y = 1
| <!-- Hello! Thanks for submitting a PR. To help make things go a bit more
smoothly we would appreciate that you go through this template. -->
### Description
Fixes #3793.
Now the `if` clauses in `match`/`case` blocks will be correctly wrapped in parentheses.
The previous problem of redundant parentheses being added to a `case` statement when it is accompanied by an `if` clause that is too long, like the example mentioned in the #3793 discussion, is also resolved.
### Checklist - did you ...
- [x] Add an entry in `CHANGES.md` if necessary?
- [x] Add / update tests if necessary?
- [ ] Add new / update outdated documentation? | https://api.github.com/repos/psf/black/pulls/4269 | 2024-03-09T01:22:52Z | 2024-03-16T14:38:07Z | 2024-03-16T14:38:07Z | 2024-03-16T14:38:08Z | 1,664 | psf/black | 23,950 |
Add description to chain pattern | diff --git a/behavioral/chain.py b/behavioral/chain.py
index 989a3072..21c5d88c 100644
--- a/behavioral/chain.py
+++ b/behavioral/chain.py
@@ -2,7 +2,22 @@
# -*- coding: utf-8 -*-
"""
+*What is this pattern about?
+This pattern aims to decouple the senders of a request from its
+receivers. It does this by allowing a request to move through chained
+objects until it is handled by an appropriate receiver.
+
+This is useful as it reduces the number of connections between objects,
+since the sender does not need explicit knowledge of the handler, and
+the receiver won't need to refer to all potential receivers, but keeps
+a reference to a single successor.
+
+*References:
http://www.dabeaz.com/coroutines/
+
+*TL;DR80
+Allow a request to pass down a chain of objects until an object handles
+the request.
"""
from contextlib import contextmanager
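The decoupling described in the new docstring can be illustrated with a minimal handler chain (an illustrative sketch, not the repository's actual `chain.py`): each handler keeps a reference to a single successor and forwards any request it cannot handle.

```python
class Handler:
    """Base handler: keeps a reference to a single successor."""

    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        # Pass the request down the chain until someone handles it.
        if self.successor is not None:
            return self.successor.handle(request)
        return None  # end of chain, request unhandled


class LowHandler(Handler):
    def handle(self, request):
        if request < 10:
            return 'low handled %s' % request
        return super().handle(request)


class HighHandler(Handler):
    def handle(self, request):
        if request >= 10:
            return 'high handled %s' % request
        return super().handle(request)


chain = LowHandler(successor=HighHandler())
print(chain.handle(3))   # low handled 3
print(chain.handle(42))  # high handled 42
```

Note that the sender only knows the head of the chain; neither handler needs a list of all potential receivers.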
| This PR adds a description and a TL;DR to the chain pattern. | https://api.github.com/repos/faif/python-patterns/pulls/227 | 2018-06-10T23:04:43Z | 2018-06-11T17:26:48Z | 2018-06-11T17:26:48Z | 2018-06-11T20:32:32Z | 231 | faif/python-patterns | 33,639 |
Move status message | diff --git a/tools/finish_release.py b/tools/finish_release.py
index 5113a76b8dd..bc8e832dfb7 100755
--- a/tools/finish_release.py
+++ b/tools/finish_release.py
@@ -166,6 +166,9 @@ def promote_snaps(version):
assert_logged_into_snapcraft()
for snap in SNAPS:
revisions = get_snap_revisions(snap, version)
+ # The loop below is kind of slow, so let's print some output about what
+ # it is doing.
+ print('Releasing', snap, 'snaps to the stable channel')
for revision in revisions:
cmd = ['snapcraft', 'release', snap, revision, 'stable']
try:
@@ -175,9 +178,6 @@ def promote_snaps(version):
print("The output printed to stdout was:")
print(e.stdout)
raise
- # This loop is kind of slow, so let's print some output about what it
- # is doing.
- print('Successfully released', snap, 'to the stable channel.')
def main(args):
| I just ran this script all the way through for the first time with all of our DNS plugins, and I found its output a little misleading. For each snap, I essentially saw it print a message like:
```
Getting revision numbers for certbot 1.9.0
```
wait quite a while and then print
```
Successfully released certbot to the stable channel.
```
almost immediately followed by
```
Getting revision numbers for certbot-dns-sakuracloud 1.9.0
```
This timing suggests to me that what is taking so long is getting revision numbers, but the real time is spent calling `snapcraft release` 3 times to publish it for all architectures.
I think adding a status message before calling `snapcraft release` clarifies this, and together with the "Getting revision numbers" message, the message after publishing isn't needed and is unnecessarily verbose since the two are printed one right after the other.
Remember you should feel free to test this out by just running the script and deleting the draft GitHub release as described [here](https://github.com/certbot/certbot/pull/8351#issue-498172023) (or just commenting out the call to `create_github_release` in `main` which is what I have been doing). | https://api.github.com/repos/certbot/certbot/pulls/8361 | 2020-10-08T22:47:51Z | 2020-10-08T23:38:06Z | 2020-10-08T23:38:06Z | 2020-10-08T23:38:09Z | 246 | certbot/certbot | 2,238 |
[niconico] fix a small mistake of thumbnail extraction | diff --git a/yt_dlp/extractor/niconico.py b/yt_dlp/extractor/niconico.py
index 126aa4530c3..1c15dca71ca 100644
--- a/yt_dlp/extractor/niconico.py
+++ b/yt_dlp/extractor/niconico.py
@@ -511,7 +511,7 @@ def get_video_info_xml(items):
thumbnail = (
self._html_search_regex(r'<meta property="og:image" content="([^"]+)">', webpage, 'thumbnail data', default=None)
- or try_get( # choose highest from 720p to 240p
+ or dict_get( # choose highest from 720p to 240p
get_video_info_web('thumbnail'),
['ogp', 'player', 'largeUrl', 'middleUrl', 'url'])
or self._html_search_meta('image', webpage, 'thumbnail', default=None)
| ### In order to be accepted and merged into youtube-dl, each piece of code must be in public domain or released under [Unlicense](http://unlicense.org/). Check one of the following options:
- [x] I am the original author of this code and I am willing to release it under [Unlicense](http://unlicense.org/)
- [ ] I am not the original author of this code but it is in public domain or released under [Unlicense](http://unlicense.org/) (provide reliable evidence)
### What is the purpose of your *pull request*?
- [x] Bug fix
- [ ] Improvement
- [ ] New extractor
- [ ] New feature
### Description of your *pull request* and other information
Thumbnails were still not downloading, but it's my fault. I realized `try_get()` only accepts a list of lambdas, so I'm using `dict_get()` now. This time I have tested my PR to make sure things work.
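The difference between the two helpers can be sketched with simplified stand-ins (these are illustrative re-implementations, not yt-dlp's actual utility code): `try_get` applies callables to the source object, so passing bare key names silently fails, while `dict_get` treats them as dictionary keys and returns the first non-empty value.

```python
def try_get(src, getters, expected_type=None):
    # Simplified: each getter is a callable applied to src.
    if not isinstance(getters, (list, tuple)):
        getters = [getters]
    for getter in getters:
        try:
            value = getter(src)
        except (AttributeError, KeyError, TypeError, IndexError):
            continue
        if expected_type is None or isinstance(value, expected_type):
            return value
    return None


def dict_get(d, keys, default=None):
    # Simplified: keys are plain dictionary keys, tried in order,
    # skipping false-y values.
    for key in keys if isinstance(keys, (list, tuple)) else [keys]:
        value = d.get(key)
        if value:
            return value
    return default


thumbs = {'largeUrl': '', 'middleUrl': 'https://example.com/mid.jpg'}

# Bare key names are not callable, so try_get silently returns None:
print(try_get(thumbs, ['largeUrl', 'middleUrl']))   # None
# dict_get treats them as keys and picks the first non-empty value:
print(dict_get(thumbs, ['largeUrl', 'middleUrl']))  # https://example.com/mid.jpg
```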
| https://api.github.com/repos/yt-dlp/yt-dlp/pulls/289 | 2021-04-30T16:05:07Z | 2021-05-01T14:05:47Z | 2021-05-01T14:05:47Z | 2021-05-01T15:34:50Z | 211 | yt-dlp/yt-dlp | 7,819 |
Support CJK parameters when post files | diff --git a/AUTHORS.rst b/AUTHORS.rst
index 1d66c4bf70..9a7f25d5e8 100644
--- a/AUTHORS.rst
+++ b/AUTHORS.rst
@@ -114,3 +114,4 @@ Patches and Suggestions
- Ian Cordasco <graffatcolmingov@gmail.com> @sigmavirus24
- Rhys Elsmore
- André Graf (dergraf)
+- Stephen Zhuang (everbird)
diff --git a/requests/models.py b/requests/models.py
index 78311491cd..f34b6f766c 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -360,9 +360,9 @@ def _encode_files(self, files):
for field, val in fields:
if isinstance(val, list):
for v in val:
- new_fields.append((field, str(v)))
+ new_fields.append((field, builtin_str(v)))
else:
- new_fields.append((field, str(val)))
+ new_fields.append((field, builtin_str(val)))
for (k, v) in files:
# support for explicit filename
diff --git a/tests/test_requests.py b/tests/test_requests.py
index 3d6f49c293..6726de8e42 100755
--- a/tests/test_requests.py
+++ b/tests/test_requests.py
@@ -336,12 +336,23 @@ def test_POSTBIN_GET_POST_FILES_WITH_PARAMS(self):
with open(__file__) as f:
url = service('post')
- post1 = post(url,
- files={'some': f},
- data={'some': 'data'})
+ post1 = post(url, data={'some': 'data'}, files={'some': f})
post2 = post(url, data={'some': 'data'}, files=[('some', f)])
- post3 = post(url, data=[('some', 'data')],
- files=[('some', f)])
+ post3 = post(url, data=[('some', 'data')], files=[('some', f)])
+
+ self.assertEqual(post1.status_code, 200)
+ self.assertEqual(post2.status_code, 200)
+ self.assertEqual(post3.status_code, 200)
+
+ def test_POSTBIN_GET_POST_FILES_WITH_CJK_PARAMS(self):
+
+ for service in SERVICES:
+
+ with open(__file__) as f:
+ url = service('post')
+ post1 = post(url, data={'some': '中文'}, files={'some': f})
+ post2 = post(url, data={'some': '日本語'}, files=[('some', f)])
+ post3 = post(url, data=[('some', '한국의')], files=[('some', f)])
self.assertEqual(post1.status_code, 200)
self.assertEqual(post2.status_code, 200)
| When I post files with CJK parameters I got this exception:
```
Traceback (most recent call last):
File "/home/everbird/code/requests/tests/test_requests.py", line 358, in test_POSTBIN_GET_POST_FILES_WITH_CJK_PARAMS
data={'some': '中文'})
File "/home/everbird/code/requests/requests/api.py", line 98, in post
return request('post', url, data=data, **kwargs)
File "/home/everbird/code/requests/requests/safe_mode.py", line 39, in wrapped
return function(method, url, **kwargs)
File "/home/everbird/code/requests/requests/api.py", line 51, in request
return session.request(method=method, url=url, **kwargs)
File "/home/everbird/code/requests/requests/sessions.py", line 241, in request
r.send(prefetch=prefetch)
File "/home/everbird/code/requests/requests/models.py", line 529, in send
(body, content_type) = self._encode_files(self.files)
File "/home/everbird/code/requests/requests/models.py", line 365, in _encode_files
new_fields.append((field, str(val)))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128)
```
For py2, `str` is actually `unicode` according to `requests/compat.py`. It is OK like this:
``` python
In [1]: str('a')
Out[1]: 'a'
In [2]: unicode('a')
Out[2]: u'a'
```
but it failed as below:
``` python
In [3]: str('中文')
Out[3]: '\xe4\xb8\xad\xe6\x96\x87'
In [4]: unicode('中文')
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-4-a7400b671605> in <module>()
----> 1 unicode('中文')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe4 in position 0: ordinal not in range(128)
```
In `requests/models.py`, at lines 363-365, the `str()` seems like it should be the builtin `str`, not `unicode`. So I wrote a test named `test_POSTBIN_GET_POST_FILES_WITH_CJK_PARAMS` and fixed it.
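The same failure can be reproduced by making Python 2's implicit ASCII decode explicit; the builtin byte-string `str` avoids it because bytes are passed through without any decoding. A sketch:

```python
text = '中文'
utf8_bytes = text.encode('utf-8')
print(utf8_bytes)  # b'\xe4\xb8\xad\xe6\x96\x87'

# Python 2's unicode('中文') amounted to this implicit ASCII decode:
try:
    utf8_bytes.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc)  # 'ascii' codec can't decode byte 0xe4 in position 0 ...
```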
Hope it helps.
| https://api.github.com/repos/psf/requests/pulls/884 | 2012-10-08T09:21:19Z | 2012-10-17T14:21:40Z | 2012-10-17T14:21:40Z | 2021-09-08T17:05:51Z | 639 | psf/requests | 32,720 |
Send_pictures small fix | diff --git a/extensions/send_pictures/script.py b/extensions/send_pictures/script.py
index 556a88e5c7..9b066d8d97 100644
--- a/extensions/send_pictures/script.py
+++ b/extensions/send_pictures/script.py
@@ -24,7 +24,7 @@ def caption_image(raw_image):
return processor.decode(out[0], skip_special_tokens=True)
def generate_chat_picture(picture, name1, name2):
- text = f'*{name1} sends {name2} a picture that contains the following: "{caption_image(picture)}"*'
+ text = f'*{name1} sends {name2} a picture that contains the following: “{caption_image(picture)}”*'
# lower the resolution of sent images for the chat, otherwise the log size gets out of control quickly with all the base64 values in visible history
picture.thumbnail((300, 300))
buffer = BytesIO()
| Since the `text` variable is just passed to the model rather than the picture itself, I changed the internal straight quotes to [typographical curly quotes](https://typographyforlawyers.com/straight-and-curly-quotes.html).
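As a sketch of why this matters (using a hypothetical caption, not the extension's actual output), straight quotes inside a caption terminate an HTML attribute early, while curly quotes pass through intact:

```python
caption = 'a sunset over water'

text_straight = f'"{caption}"'
text_curly = f'\u201c{caption}\u201d'  # “ … ”

broken_html = f'<img alt="{text_straight}">'
alt_html = f'<img alt="{text_curly}">'

# The straight-quoted version closes the alt attribute prematurely:
print(broken_html)  # <img alt=""a sunset over water"">
print(alt_html)     # <img alt="“a sunset over water”">
```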
The model interprets this all the same, yet curly quotes prevent possible HTML & JSON breakage down the line, as was the case with the alt-text 6 lines lower ( `visible_text = f'<img src="data:image/jpeg;base64,{img_str}"` **alt="{text}"**`>` ) | https://api.github.com/repos/oobabooga/text-generation-webui/pulls/546 | 2023-03-24T23:34:15Z | 2023-04-08T04:55:16Z | 2023-04-08T04:55:16Z | 2023-04-10T00:55:50Z | 209 | oobabooga/text-generation-webui | 26,071 |
Added CC0 licence | diff --git a/LICENCE b/LICENCE
new file mode 100644
index 00000000..6ca207ef
--- /dev/null
+++ b/LICENCE
@@ -0,0 +1,122 @@
+Creative Commons Legal Code
+
+CC0 1.0 Universal
+
+ CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
+ LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
+ ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
+ INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
+ REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
+ PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
+ THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
+ HEREUNDER.
+
+Statement of Purpose
+
+The laws of most jurisdictions throughout the world automatically confer
+exclusive Copyright and Related Rights (defined below) upon the creator
+and subsequent owner(s) (each and all, an "owner") of an original work of
+authorship and/or a database (each, a "Work").
+
+Certain owners wish to permanently relinquish those rights to a Work for
+the purpose of contributing to a commons of creative, cultural and
+scientific works ("Commons") that the public can reliably and without fear
+of later claims of infringement build upon, modify, incorporate in other
+works, reuse and redistribute as freely as possible in any form whatsoever
+and for any purposes, including without limitation commercial purposes.
+These owners may contribute to the Commons to promote the ideal of a free
+culture and the further production of creative, cultural and scientific
+works, or to gain reputation or greater distribution for their Work in
+part through the use and efforts of others.
+
+For these and/or other purposes and motivations, and without any
+expectation of additional consideration or compensation, the person
+associating CC0 with a Work (the "Affirmer"), to the extent that he or she
+is an owner of Copyright and Related Rights in the Work, voluntarily
+elects to apply CC0 to the Work and publicly distribute the Work under its
+terms, with knowledge of his or her Copyright and Related Rights in the
+Work and the meaning and intended legal effect of CC0 on those rights.
+
+1. Copyright and Related Rights. A Work made available under CC0 may be
+protected by copyright and related or neighboring rights ("Copyright and
+Related Rights"). Copyright and Related Rights include, but are not
+limited to, the following:
+
+ i. the right to reproduce, adapt, distribute, perform, display,
+ communicate, and translate a Work;
+ ii. moral rights retained by the original author(s) and/or performer(s);
+iii. publicity and privacy rights pertaining to a person's image or
+ likeness depicted in a Work;
+ iv. rights protecting against unfair competition in regards to a Work,
+ subject to the limitations in paragraph 4(a), below;
+ v. rights protecting the extraction, dissemination, use and reuse of data
+ in a Work;
+ vi. database rights (such as those arising under Directive 96/9/EC of the
+ European Parliament and of the Council of 11 March 1996 on the legal
+ protection of databases, and under any national implementation
+ thereof, including any amended or successor version of such
+ directive); and
+vii. other similar, equivalent or corresponding rights throughout the
+ world based on applicable law or treaty, and any national
+ implementations thereof.
+
+2. Waiver. To the greatest extent permitted by, but not in contravention
+of, applicable law, Affirmer hereby overtly, fully, permanently,
+irrevocably and unconditionally waives, abandons, and surrenders all of
+Affirmer's Copyright and Related Rights and associated claims and causes
+of action, whether now known or unknown (including existing as well as
+future claims and causes of action), in the Work (i) in all territories
+worldwide, (ii) for the maximum duration provided by applicable law or
+treaty (including future time extensions), (iii) in any current or future
+medium and for any number of copies, and (iv) for any purpose whatsoever,
+including without limitation commercial, advertising or promotional
+purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
+member of the public at large and to the detriment of Affirmer's heirs and
+successors, fully intending that such Waiver shall not be subject to
+revocation, rescission, cancellation, termination, or any other legal or
+equitable action to disrupt the quiet enjoyment of the Work by the public
+as contemplated by Affirmer's express Statement of Purpose.
+
+3. Public License Fallback. Should any part of the Waiver for any reason
+be judged legally invalid or ineffective under applicable law, then the
+Waiver shall be preserved to the maximum extent permitted taking into
+account Affirmer's express Statement of Purpose. In addition, to the
+extent the Waiver is so judged Affirmer hereby grants to each affected
+person a royalty-free, non transferable, non sublicensable, non exclusive,
+irrevocable and unconditional license to exercise Affirmer's Copyright and
+Related Rights in the Work (i) in all territories worldwide, (ii) for the
+maximum duration provided by applicable law or treaty (including future
+time extensions), (iii) in any current or future medium and for any number
+of copies, and (iv) for any purpose whatsoever, including without
+limitation commercial, advertising or promotional purposes (the
+"License"). The License shall be deemed effective as of the date CC0 was
+applied by Affirmer to the Work. Should any part of the License for any
+reason be judged legally invalid or ineffective under applicable law, such
+partial invalidity or ineffectiveness shall not invalidate the remainder
+of the License, and in such case Affirmer hereby affirms that he or she
+will not (i) exercise any of his or her remaining Copyright and Related
+Rights in the Work or (ii) assert any associated claims and causes of
+action with respect to the Work, in either case contrary to Affirmer's
+express Statement of Purpose.
+
+4. Limitations and Disclaimers.
+
+ a. No trademark or patent rights held by Affirmer are waived, abandoned,
+ surrendered, licensed or otherwise affected by this document.
+ b. Affirmer offers the Work as-is and makes no representations or
+ warranties of any kind concerning the Work, express, implied,
+ statutory or otherwise, including without limitation warranties of
+ title, merchantability, fitness for a particular purpose, non
+ infringement, or the absence of latent or other defects, accuracy, or
+ the present or absence of errors, whether or not discoverable, all to
+ the greatest extent permissible under applicable law.
+ c. Affirmer disclaims responsibility for clearing rights of other persons
+ that may apply to the Work or any use thereof, including without
+ limitation any person's Copyright and Related Rights in the Work.
+ Further, Affirmer disclaims responsibility for obtaining any necessary
+ consents, permissions or other rights required for any use of the
+ Work.
+ d. Affirmer understands and acknowledges that Creative Commons is not a
+ party to this document and has no duty or obligation with respect to
+ this CC0 or use of the Work.
+
| Resolves #121
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/193 | 2015-10-24T12:04:36Z | 2015-10-24T20:20:37Z | 2015-10-24T20:20:37Z | 2015-10-24T20:20:41Z | 1,677 | josephmisiti/awesome-machine-learning | 52,258 |
change: import of unittest supports python2 and python3 | diff --git a/test_command.py b/test_command.py
index 0f5bbcf6..d3b42f6c 100644
--- a/test_command.py
+++ b/test_command.py
@@ -1,6 +1,10 @@
from command import MoveFileCommand
-import unittest, os, shutil, subprocess
+import os, shutil, subprocess, sys
+if sys.version_info < (2, 7):
+ import unittest2 as unittest
+else:
+ import unittest
class CommandTest(unittest.TestCase):
diff --git a/test_state.py b/test_state.py
new file mode 100644
index 00000000..079602b0
--- /dev/null
+++ b/test_state.py
@@ -0,0 +1,62 @@
+from state import Radio
+import sys
+
+if sys.version_info < (2, 7):
+ import unittest2 as unittest
+else:
+ import unittest
+
+class RadioTest(unittest.TestCase):
+ """
+ Attention: Test case results depend on test case execution. The test cases
+ in this integration test class should be executed in an explicit order:
+ http://stackoverflow.com/questions/5387299/python-unittest-testcase-execution-order
+ """
+
+ @classmethod
+ def setUpClass(self):
+ self.radio = Radio()
+
+ def test_initial_state(self):
+ state = self.radio.state.name
+ expected_state_name = 'AM'
+ self.assertEqual(state, expected_state_name)
+
+ def test_initial_am_station(self):
+ station = self.radio.state.stations[self.radio.state.pos]
+ expected_station = '1250'
+ self.assertEqual(station, expected_station)
+
+ def test_2nd_am_station_after_scan(self):
+ self.radio.scan()
+ station = self.radio.state.stations[self.radio.state.pos]
+ expected_station = '1380'
+ self.assertEqual(station, expected_station)
+
+ def test_3rd_am_station_after_scan(self):
+ self.radio.scan()
+ station = self.radio.state.stations[self.radio.state.pos]
+ expected_station = '1510'
+ self.assertEqual(station, expected_station)
+
+ def test_am_station_overflow_after_scan(self):
+ self.radio.scan()
+ station = self.radio.state.stations[self.radio.state.pos]
+ expected_station = '1250'
+ self.assertEqual(station, expected_station)
+
+ def test_shall_toggle_from_am_to_fm(self):
+ self.radio.toggle_amfm()
+ state = self.radio.state.name
+ expected_state_name = 'FM'
+ self.assertEqual(state, expected_state_name)
+
+ def test_shall_toggle_from_fm_to_am(self):
+ self.radio.toggle_amfm()
+ state = self.radio.state.name
+ expected_state_name = 'AM'
+ self.assertEqual(state, expected_state_name)
+
+if __name__ == "__main__":
+ unittest.main()
+
diff --git a/test_strategy.py b/test_strategy.py
index 7e0953d5..a5f3ee3e 100644
--- a/test_strategy.py
+++ b/test_strategy.py
@@ -2,8 +2,12 @@
Tests for strategy.py
"""
-import unittest
-import subprocess
+import subprocess, sys
+
+if sys.version_info < (2, 7):
+ import unittest2 as unittest
+else:
+ import unittest
class StrategyTest(unittest.TestCase):
| - the import of unittest now supports both python2 and python3
- integration test for state.py added
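The `state.py` module under test is not shown in the diff; a minimal implementation consistent with these assertions might look like this (a hypothetical sketch, not the repository's actual code; the FM station values are assumed):

```python
class AmState:
    def __init__(self, radio):
        self.radio = radio
        self.name = 'AM'
        self.stations = ['1250', '1380', '1510']
        self.pos = 0

    def toggle_amfm(self):
        self.radio.state = self.radio.fmstate

    def scan(self):
        # Advance to the next station, wrapping around at the end.
        self.pos = (self.pos + 1) % len(self.stations)


class FmState:
    def __init__(self, radio):
        self.radio = radio
        self.name = 'FM'
        self.stations = ['81.3', '89.1', '103.9']  # values assumed
        self.pos = 0

    def toggle_amfm(self):
        self.radio.state = self.radio.amstate

    def scan(self):
        self.pos = (self.pos + 1) % len(self.stations)


class Radio:
    def __init__(self):
        self.amstate = AmState(self)
        self.fmstate = FmState(self)
        self.state = self.amstate  # initial state is AM

    def toggle_amfm(self):
        self.state.toggle_amfm()

    def scan(self):
        self.state.scan()
```

Scanning three times from the initial AM station cycles `1250 → 1380 → 1510 → 1250`, matching the overflow test above.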
| https://api.github.com/repos/faif/python-patterns/pulls/116 | 2016-02-10T20:21:13Z | 2016-02-12T19:47:45Z | 2016-02-12T19:47:45Z | 2016-02-12T19:47:45Z | 750 | faif/python-patterns | 33,560 |
add omit_zero to binance | diff --git a/js/base/functions/number.js b/js/base/functions/number.js
index cfd6e7db373d..6bc54ecc0526 100644
--- a/js/base/functions/number.js
+++ b/js/base/functions/number.js
@@ -314,6 +314,16 @@ function toWei (amount, decimals = 18) {
return numberToString (Math.floor (parseFloat (n + 'e' + newExponent))) // wei must be whole numbers
}
+function omitZero (stringNumber) {
+ if (stringNumber === undefined) {
+ return undefined
+ }
+ if (parseFloat (stringNumber) === 0) {
+ return undefined
+ }
+ return stringNumber
+}
+
/* ------------------------------------------------------------------------ */
module.exports = {
@@ -324,6 +334,7 @@ module.exports = {
decimalToPrecision,
truncate_to_string,
truncate,
+ omitZero,
precisionConstants,
ROUND,
TRUNCATE,
diff --git a/js/binance.js b/js/binance.js
index 89d6b1f20818..fd86981ea8c2 100644
--- a/js/binance.js
+++ b/js/binance.js
@@ -1845,9 +1845,9 @@ module.exports = class binance extends Exchange {
const status = this.parseOrderStatus (this.safeString (order, 'status'));
const marketId = this.safeString (order, 'symbol');
const symbol = this.safeSymbol (marketId, market);
- const filled = this.safeNumber (order, 'executedQty');
- // using safeFloat here until we add comparisons to Precise
- const floatFilled = this.safeFloat (order, 'executedQty');
+ const filledString = this.safeString (order, 'executedQty', '0');
+ const filled = this.parseNumber (filledString);
+ const filledFloat = parseFloat (filledString);
let timestamp = undefined;
let lastTradeTimestamp = undefined;
if ('time' in order) {
@@ -1856,20 +1856,17 @@ module.exports = class binance extends Exchange {
timestamp = this.safeInteger (order, 'transactTime');
} else if ('updateTime' in order) {
if (status === 'open') {
- if (floatFilled > 0) {
+ if (filledFloat > 0) {
lastTradeTimestamp = this.safeInteger (order, 'updateTime');
} else {
timestamp = this.safeInteger (order, 'updateTime');
}
}
}
- const average = this.safeNumber (order, 'avgPrice');
- let price = this.safeNumber (order, 'price');
- // using safeFloat here until we add comparisons to Precise
- const floatPrice = this.safeFloat (order, 'price');
- if (floatPrice <= 0) {
- price = undefined;
- }
+ const averageString = this.safeString (order, 'avgPrice');
+ const average = this.parseNumber (this.omitZero (averageString));
+ const priceString = this.safeString (order, 'price');
+ const price = this.parseNumber (this.omitZero (priceString));
const amount = this.safeNumber (order, 'origQty');
// - Spot/Margin market: cummulativeQuoteQty
// - Futures market: cumQuote.
@@ -1886,7 +1883,8 @@ module.exports = class binance extends Exchange {
const clientOrderId = this.safeString (order, 'clientOrderId');
const timeInForce = this.safeString (order, 'timeInForce');
const postOnly = (type === 'limit_maker') || (timeInForce === 'GTX');
- const stopPrice = this.safeNumber (order, 'stopPrice');
+ const stopPriceString = this.safeString (order, 'stopPrice');
+ const stopPrice = this.parseNumber (this.omitZero (stopPriceString));
return this.safeOrder ({
'info': order,
'id': id,
diff --git a/php/Exchange.php b/php/Exchange.php
index 3a356af7b007..514296991cc6 100644
--- a/php/Exchange.php
+++ b/php/Exchange.php
@@ -224,6 +224,7 @@ class Exchange {
'numberToString' => 'number_to_string',
'precisionFromString' => 'precision_from_string',
'decimalToPrecision' => 'decimal_to_precision',
+ 'omitZero' => 'omit_zero',
'isJsonEncodedObject' => 'is_json_encoded_object',
'stringToBinary' => 'string_to_binary',
'stringToBase64' => 'string_to_base64',
@@ -3071,4 +3072,14 @@ public function parse_precision($precision) {
}
return '1e' . Precise::string_neg($precision);
}
+
+ public function omit_zero($string_number) {
+ if ($string_number === null) {
+ return null;
+ }
+ if (floatval($string_number) === null) {
+ return null;
+ }
+ return $string_number;
+ }
}
diff --git a/python/ccxt/base/exchange.py b/python/ccxt/base/exchange.py
index 5949cd733570..f2a895dda60d 100644
--- a/python/ccxt/base/exchange.py
+++ b/python/ccxt/base/exchange.py
@@ -2321,3 +2321,10 @@ def parse_precision(self, precision):
if precision is None:
return None
return '1e' + Precise.string_neg(precision)
+
+ def omit_zero(self, string_number):
+ if string_number is None:
+ return None
+ if float(string_number) == 0:
+ return None
+ return string_number
| https://api.github.com/repos/ccxt/ccxt/pulls/9283 | 2021-05-28T10:38:03Z | 2021-05-28T20:51:09Z | 2021-05-28T20:51:09Z | 2021-05-28T20:51:09Z | 1,287 | ccxt/ccxt | 13,577 |
|
Add Saleor to e-commerce section | diff --git a/README.md b/README.md
index ec698d1cf..79c4dacaa 100644
--- a/README.md
+++ b/README.md
@@ -503,6 +503,7 @@ Inspired by [awesome-php](https://github.com/ziadoz/awesome-php).
* [money](https://github.com/carlospalol/money) - Money class with optional CLDR-backed locale-aware formatting and an extensible currency exchange solution.
* [python-currencies](https://github.com/Alir3z4/python-currencies) - Display money format and its filthy currencies.
* [forex-python](https://github.com/MicroPyramid/forex-python) - Foreign exchange rates, Bitcoin price index and currency conversion.
+* [saleor](http://getsaleor.com/) - An e-commerce storefront for Django.
* [shoop](https://www.shuup.com/en/) - An open source E-Commerce platform based on Django.
## Editor Plugins and IDEs
| ## What is this Python project?
Saleor is one of the [most popular](https://www.ecommwar.com/) Python e-commerce platforms.
## What's the difference between this Python project and similar ones?
Saleor is designed to be a forkable project template rather than a library that you add to an existing project.
--
Anyone who agrees with this pull request could vote for it by adding a :+1: to it, and usually, the maintainer will merge it when votes reach **20**.
| https://api.github.com/repos/vinta/awesome-python/pulls/978 | 2017-11-20T10:31:42Z | 2017-11-20T16:51:19Z | 2017-11-20T16:51:19Z | 2017-11-20T16:51:19Z | 219 | vinta/awesome-python | 27,243 |
REGR: groupby fails with nullable dtypes and dropna=False | diff --git a/doc/source/whatsnew/v1.5.1.rst b/doc/source/whatsnew/v1.5.1.rst
index f58e10a701740..ab80bdb9ed782 100644
--- a/doc/source/whatsnew/v1.5.1.rst
+++ b/doc/source/whatsnew/v1.5.1.rst
@@ -83,7 +83,7 @@ Fixed regressions
- Fixed :meth:`.DataFrameGroupBy.size` not returning a Series when ``axis=1`` (:issue:`48738`)
- Fixed Regression in :meth:`DataFrameGroupBy.apply` when user defined function is called on an empty dataframe (:issue:`47985`)
- Fixed regression in :meth:`DataFrame.apply` when passing non-zero ``axis`` via keyword argument (:issue:`48656`)
--
+- Fixed regression in :meth:`Series.groupby` and :meth:`DataFrame.groupby` when the grouper is a nullable data type (e.g. :class:`Int64`) or a PyArrow-backed string array, contains null values, and ``dropna=False`` (:issue:`48794`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 31b34e557531c..f6f933b1b9917 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -619,8 +619,8 @@ def factorize(
na_mask = indices.values == -1
na_index = na_mask.argmax()
if na_mask[na_index]:
- uniques = uniques.insert(na_index, self.dtype.na_value)
- na_code = 0 if na_index == 0 else indices[:na_index].argmax() + 1
+ na_code = 0 if na_index == 0 else indices[:na_index].max() + 1
+ uniques = uniques.insert(na_code, self.dtype.na_value)
indices[indices >= na_code] += 1
indices[indices == -1] = na_code
else:
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 043e0baf3ec0e..34ca205f7709a 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -913,7 +913,7 @@ def factorize(
else:
# mypy error: Slice index must be an integer or None
# https://github.com/python/mypy/issues/2410
- na_code = codes[:na_index].argmax() + 1 # type: ignore[misc]
+ na_code = codes[:na_index].max() + 1 # type: ignore[misc]
codes[codes >= na_code] += 1
codes[codes == -1] = na_code
# dummy value for uniques; not used since uniques_mask will be True
diff --git a/pandas/tests/groupby/test_groupby_dropna.py b/pandas/tests/groupby/test_groupby_dropna.py
index 360e3096ceb63..ee660dd073ce9 100644
--- a/pandas/tests/groupby/test_groupby_dropna.py
+++ b/pandas/tests/groupby/test_groupby_dropna.py
@@ -393,74 +393,91 @@ def test_groupby_drop_nan_with_multi_index():
tm.assert_frame_equal(result, expected)
+# sequence_index enumerates all strings made up of x, y, z of length 4
+@pytest.mark.parametrize("sequence_index", range(3**4))
@pytest.mark.parametrize(
- "values, dtype",
+ "dtype",
[
- ([2, np.nan, 1, 2], None),
- ([2, np.nan, 1, 2], "UInt8"),
- ([2, np.nan, 1, 2], "Int8"),
- ([2, np.nan, 1, 2], "UInt16"),
- ([2, np.nan, 1, 2], "Int16"),
- ([2, np.nan, 1, 2], "UInt32"),
- ([2, np.nan, 1, 2], "Int32"),
- ([2, np.nan, 1, 2], "UInt64"),
- ([2, np.nan, 1, 2], "Int64"),
- ([2, np.nan, 1, 2], "Float32"),
- ([2, np.nan, 1, 2], "Int64"),
- ([2, np.nan, 1, 2], "Float64"),
+ None,
+ "UInt8",
+ "Int8",
+ "UInt16",
+ "Int16",
+ "UInt32",
+ "Int32",
+ "UInt64",
+ "Int64",
+ "Float32",
+ "Int64",
+ "Float64",
+ "category",
+ "string",
pytest.param(
- ["y", None, "x", "y"],
- "category",
- marks=pytest.mark.xfail(
- reason="dropna=False not correct for categorical, GH#48645"
- ),
- ),
- (["y", pd.NA, "x", "y"], "string"),
- pytest.param(
- ["y", pd.NA, "x", "y"],
"string[pyarrow]",
marks=pytest.mark.skipif(
pa_version_under1p01, reason="pyarrow is not installed"
),
),
- (
- ["2016-01-01", np.datetime64("NaT"), "2017-01-01", "2016-01-01"],
- "datetime64[ns]",
- ),
- (
- [
- pd.Period("2012-02-01", freq="D"),
- pd.NaT,
- pd.Period("2012-01-01", freq="D"),
- pd.Period("2012-02-01", freq="D"),
- ],
- None,
- ),
- (pd.arrays.SparseArray([2, np.nan, 1, 2]), None),
+ "datetime64[ns]",
+ "period[d]",
+ "Sparse[float]",
],
)
@pytest.mark.parametrize("test_series", [True, False])
-def test_no_sort_keep_na(values, dtype, test_series):
- # GH#46584
- key = pd.Series(values, dtype=dtype)
- df = pd.DataFrame({"key": key, "a": [1, 2, 3, 4]})
+def test_no_sort_keep_na(request, sequence_index, dtype, test_series):
+ # GH#46584, GH#48794
+
+ # Convert sequence_index into a string sequence, e.g. 5 becomes "xxyz"
+ # This sequence is used for the grouper.
+ sequence = "".join(
+ [{0: "x", 1: "y", 2: "z"}[sequence_index // (3**k) % 3] for k in range(4)]
+ )
+
+ if dtype == "category" and "z" in sequence:
+ # Only xfail when nulls are present
+ msg = "dropna=False not correct for categorical, GH#48645"
+ request.node.add_marker(pytest.mark.xfail(reason=msg))
+
+ # Unique values to use for grouper, depends on dtype
+ if dtype in ("string", "string[pyarrow]"):
+ uniques = {"x": "x", "y": "y", "z": pd.NA}
+ elif dtype in ("datetime64[ns]", "period[d]"):
+ uniques = {"x": "2016-01-01", "y": "2017-01-01", "z": pd.NA}
+ else:
+ uniques = {"x": 1, "y": 2, "z": np.nan}
+
+ df = pd.DataFrame(
+ {
+ "key": pd.Series([uniques[label] for label in sequence], dtype=dtype),
+ "a": [0, 1, 2, 3],
+ }
+ )
gb = df.groupby("key", dropna=False, sort=False)
if test_series:
gb = gb["a"]
+ result = gb.sum()
- warn = None
- if isinstance(values, pd.arrays.SparseArray):
- warn = FutureWarning
- msg = "passing a SparseArray to pd.Index will store that array directly"
- with tm.assert_produces_warning(warn, match=msg):
- result = gb.sum()
- expected = pd.DataFrame({"a": [5, 2, 3]}, index=key[:-1].rename("key"))
+ # Manually compute the groupby sum, use the labels "x", "y", and "z" to avoid
+ # issues with hashing np.nan
+ summed = {}
+ for idx, label in enumerate(sequence):
+ summed[label] = summed.get(label, 0) + idx
+ if dtype == "category":
+ index = pd.CategoricalIndex(
+ [uniques[e] for e in summed],
+ list({uniques[k]: 0 for k in sequence if not pd.isnull(uniques[k])}),
+ name="key",
+ )
+ elif isinstance(dtype, str) and dtype.startswith("Sparse"):
+ index = pd.Index(
+ pd.array([uniques[label] for label in summed], dtype=dtype), name="key"
+ )
+ else:
+ index = pd.Index([uniques[label] for label in summed], dtype=dtype, name="key")
+ expected = pd.Series(summed.values(), index=index, name="a", dtype=None)
+ if not test_series:
+ expected = expected.to_frame()
- if test_series:
- expected = expected["a"]
- if expected.index.is_categorical():
- # TODO: Slicing reorders categories?
- expected.index = expected.index.reorder_categories(["y", "x"])
tm.assert_equal(result, expected)
| - [x] closes #48794 (Replace xxxx with the Github issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This reworked test is a bit ghastly; any suggestions on how to simplify or clean it up are most welcome.
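The one-line core of the fix above — replacing `argmax()` with `max()` when computing `na_code` — is easy to see on a tiny, hypothetical codes array (a plain-Python sketch, not pandas internals):

```python
# Factorization codes with -1 marking the null entry (hypothetical example).
codes = [0, 0, 1, -1, 2]
na_index = codes.index(-1)          # position of the first null -> 3
prefix = codes[:na_index]           # codes seen before the null -> [0, 0, 1]

# Buggy version: argmax returns the *position* of the largest code, not its value.
buggy_na_code = prefix.index(max(prefix)) + 1   # 2 + 1 = 3  (wrong)

# Fixed version: the null's code should be one past the largest code *value*.
na_code = max(prefix) + 1                       # 1 + 1 = 2  (right)

print(buggy_na_code, na_code)
```

With repeated codes like `[0, 0, 1]`, the position of the maximum and its value diverge, which is exactly the regression being fixed here.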
fix(workflow): Add default for `alert-release-notification-workflow` | diff --git a/src/sentry/conf/server.py b/src/sentry/conf/server.py
index 5dcf62f231abc8..9443c2649da318 100644
--- a/src/sentry/conf/server.py
+++ b/src/sentry/conf/server.py
@@ -927,6 +927,8 @@ def create_partitioned_queues(name):
"organizations:advanced-search": True,
# Use metrics as the dataset for crash free metric alerts
"organizations:alert-crash-free-metrics": False,
+ # Workflow 2.0 notifications following a release
+ "organizations:alert-release-notification-workflow": False,
# Alert wizard redesign version 3
"organizations:alert-wizard-v3": False,
"organizations:api-keys": False,
| follow up to #35806 | https://api.github.com/repos/getsentry/sentry/pulls/35870 | 2022-06-21T22:02:54Z | 2022-06-21T22:27:46Z | 2022-06-21T22:27:46Z | 2024-03-05T19:40:16Z | 167 | getsentry/sentry | 44,029 |
Implement Differential Diffusion | diff --git a/comfy/samplers.py b/comfy/samplers.py
index c795f208d8..a4a5595117 100644
--- a/comfy/samplers.py
+++ b/comfy/samplers.py
@@ -276,6 +276,8 @@ def __init__(self, model):
self.inner_model = model
def forward(self, x, sigma, uncond, cond, cond_scale, denoise_mask, model_options={}, seed=None):
if denoise_mask is not None:
+ if "denoise_mask_function" in model_options:
+ denoise_mask = model_options["denoise_mask_function"](sigma, denoise_mask)
latent_mask = 1. - denoise_mask
x = x * denoise_mask + (self.latent_image + self.noise * sigma.reshape([sigma.shape[0]] + [1] * (len(self.noise.shape) - 1))) * latent_mask
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
diff --git a/comfy_extras/nodes_differential_diffusion.py b/comfy_extras/nodes_differential_diffusion.py
new file mode 100644
index 0000000000..48c95602ff
--- /dev/null
+++ b/comfy_extras/nodes_differential_diffusion.py
@@ -0,0 +1,97 @@
+# code adapted from https://github.com/exx8/differential-diffusion
+
+import torch
+import inspect
+
+class DifferentialDiffusion():
+ @classmethod
+ def INPUT_TYPES(s):
+ return {"required": {"model": ("MODEL", ),
+ }}
+ RETURN_TYPES = ("MODEL",)
+ FUNCTION = "apply"
+ CATEGORY = "_for_testing"
+ INIT = False
+
+ @classmethod
+ def IS_CHANGED(s, *args, **kwargs):
+ DifferentialDiffusion.INIT = s.INIT = True
+ return ""
+
+ def __init__(self) -> None:
+ DifferentialDiffusion.INIT = False
+ self.sigmas: torch.Tensor = None
+ self.thresholds: torch.Tensor = None
+ self.mask_i = None
+ self.valid_sigmas = False
+ self.varying_sigmas_samplers = ["dpmpp_2s", "dpmpp_sde", "dpm_2", "heun", "restart"]
+
+ def apply(self, model):
+ model = model.clone()
+ model.model_options["denoise_mask_function"] = self.forward
+ return (model,)
+
+ def init_sigmas(self, sigma: torch.Tensor, denoise_mask: torch.Tensor):
+ self.__init__()
+ self.sigmas, sampler = find_outer_instance("sigmas", callback=get_sigmas_and_sampler) or (None, "")
+ self.valid_sigmas = not ("sample_" not in sampler or any(s in sampler for s in self.varying_sigmas_samplers)) or "generic" in sampler
+ if self.sigmas is None:
+ self.sigmas = sigma[:1].repeat(2)
+ self.sigmas[-1].zero_()
+ self.sigmas_min = self.sigmas.min()
+ self.sigmas_max = self.sigmas.max()
+ self.thresholds = torch.linspace(1, 0, self.sigmas.shape[0], dtype=sigma.dtype, device=sigma.device)
+ self.thresholds_min_len = self.thresholds.shape[0] - 1
+ if self.valid_sigmas:
+ thresholds = self.thresholds[:-1].reshape(-1, 1, 1, 1, 1)
+ mask = denoise_mask.unsqueeze(0)
+ mask = (mask >= thresholds).to(denoise_mask.dtype)
+ self.mask_i = iter(mask)
+
+ def forward(self, sigma: torch.Tensor, denoise_mask: torch.Tensor):
+ if self.sigmas is None or DifferentialDiffusion.INIT:
+ self.init_sigmas(sigma, denoise_mask)
+ if self.valid_sigmas:
+ try:
+ return next(self.mask_i)
+ except StopIteration:
+ self.valid_sigmas = False
+ if self.thresholds_min_len > 1:
+ nearest_idx = (self.sigmas - sigma[0]).abs().argmin()
+ if not self.thresholds_min_len > nearest_idx:
+ nearest_idx = -2
+ threshold = self.thresholds[nearest_idx]
+ else:
+ threshold = (sigma[0] - self.sigmas_min) / (self.sigmas_max - self.sigmas_min)
+ return (denoise_mask >= threshold).to(denoise_mask.dtype)
+
+def get_sigmas_and_sampler(frame, target):
+ found = frame.f_locals[target]
+ if isinstance(found, torch.Tensor) and found[-1] < 0.1:
+ return found, frame.f_code.co_name
+ return False
+
+def find_outer_instance(target: str, target_type=None, callback=None):
+ frame = inspect.currentframe()
+ i = 0
+ while frame and i < 100:
+ if target in frame.f_locals:
+ if callback is not None:
+ res = callback(frame, target)
+ if res:
+ return res
+ else:
+ found = frame.f_locals[target]
+ if isinstance(found, target_type):
+ return found
+ frame = frame.f_back
+ i += 1
+ return None
+
+
+NODE_CLASS_MAPPINGS = {
+ "DifferentialDiffusion": DifferentialDiffusion,
+}
+NODE_DISPLAY_NAME_MAPPINGS = {
+ "DifferentialDiffusion": "Differential Diffusion",
+}
diff --git a/nodes.py b/nodes.py
index a577c21262..b759d22cef 100644
--- a/nodes.py
+++ b/nodes.py
@@ -1961,6 +1961,7 @@ def init_custom_nodes():
"nodes_photomaker.py",
"nodes_cond.py",
"nodes_stable_cascade.py",
+ "nodes_differential_diffusion.py",
]
for node_file in extras_files:
| The authors of the paper introduced a concept called a _change map_. For brevity, I will refer to it here as a _mask_.
The node `_for_testing/Differential Diffusion` is meant to be used alongside the `InpaintModelConditioning` node or the `Set Latent Noise Mask` node, and then the sampler does it automatically.
#### Implementation details
* A [full change](https://github.com/exx8/differential-diffusion/issues/2#issuecomment-1738136217) of the mask is done, essentially $\fbox{mask >= threshold}$ instead of $\fbox{mask > threshold}$. This is important because the contents of the mask should be followed, and for latent inpainting/outpainting (where the mask becomes binary), the initial step _should_ paint _something_.
#### General info/tips
* Lighter areas get painted earlier, darker areas later.
* Try adjusting the mask's blur, opacity, brightness, exposure, etc.
* To get more of the original image, lower the mask's opacity/brightness or increase its blur.
* The more steps, the finer it's applied (due to finer thresholds).
* Lower your cfg if things look too noisy or fried.
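The "lighter areas get painted earlier" behaviour, and why the node uses `>=` rather than `>`, can be sketched with a toy mask (assumed normalized to [0, 1]; a plain-Python stand-in for the tensor ops):

```python
steps = 4
# Per-step thresholds descending from 1 toward 0 (mirrors torch.linspace(1, 0, ...)).
thresholds = [1 - k / steps for k in range(steps)]   # [1.0, 0.75, 0.5, 0.25]

mask = [1.0, 0.6, 0.3]   # 1.0 = change fully (binary inpaint); darker = kept longer
for t in thresholds:
    with_ge = [float(m >= t) for m in mask]   # what the node does
    with_gt = [float(m > t) for m in mask]    # the alternative it avoids
    print(t, with_ge, with_gt)
```

At the first step (`t = 1.0`), `>=` already paints the fully-masked pixel (`[1.0, 0.0, 0.0]`) while `>` would paint nothing (`[0.0, 0.0, 0.0]`) — the "initial step _should_ paint _something_" point above. Lighter values then cross earlier thresholds, darker ones later.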
#### Comparisons
These were made alongside the `InpaintModelConditioning` node _without_ an inpainting model.
| |
| :----: |
| ![ComfyUI_temp_vytsl_00046_](https://github.com/comfyanonymous/ComfyUI/assets/54494639/70becbb2-18b9-4307-837d-903018e36258) |
| | |
| :----: | :----: |
| <img height="512" alt="Example comparison 2" src="https://github.com/comfyanonymous/ComfyUI/assets/54494639/c593ccdc-4078-4e89-bb9f-d0611b23d2d3"> | <img height="512" alt="Example comparison 3" src="https://github.com/comfyanonymous/ComfyUI/assets/54494639/180061cf-2f69-4455-9ec6-17f1d2c4038e"> |
#### Example workflow
Here's a simple workflow to get started. (Note: It uses some custom nodes).
Feel free to use another blur node.
Also, see the official [inpaint examples](https://comfyanonymous.github.io/ComfyUI_examples/inpaint/).
<div align="center">
<img width="256" alt="Example workflow" src="https://github.com/shiimizu/ComfyUI_smZNodes/assets/https://github.com/comfyanonymous/ComfyUI/assets/54494639/5a9b143c-2b48-4d87-83a5-15e9448367df">
<p>Download this image to drag & drop it in ComfyUI.</p>
</div>
#### Issues
* ~~The callback doesn't seem to get called for UniPC samplers.~~
---
Resolves #2851, resolves #2671 | https://api.github.com/repos/comfyanonymous/ComfyUI/pulls/2876 | 2024-02-23T04:35:03Z | 2024-03-03T20:34:14Z | 2024-03-03T20:34:14Z | 2024-04-15T14:30:36Z | 1,384 | comfyanonymous/ComfyUI | 17,773 |
VW: Prep for MQB longitudinal | diff --git a/selfdrive/car/volkswagen/carcontroller.py b/selfdrive/car/volkswagen/carcontroller.py
index 5624c3dd5fcfde..816933f2f0168e 100644
--- a/selfdrive/car/volkswagen/carcontroller.py
+++ b/selfdrive/car/volkswagen/carcontroller.py
@@ -7,6 +7,7 @@
from selfdrive.car.volkswagen.values import CANBUS, PQ_CARS, CarControllerParams
VisualAlert = car.CarControl.HUDControl.VisualAlert
+LongCtrlState = car.CarControl.Actuators.LongControlState
class CarController:
@@ -25,7 +26,6 @@ def __init__(self, dbc_name, CP, VM):
def update(self, CC, CS, ext_bus):
actuators = CC.actuators
hud_control = CC.hudControl
-
can_sends = []
# **** Steering Controls ************************************************ #
@@ -71,9 +71,12 @@ def update(self, CC, CS, ext_bus):
# **** Acceleration Controls ******************************************** #
if self.frame % self.CCP.ACC_CONTROL_STEP == 0 and self.CP.openpilotLongitudinalControl:
- tsk_status = self.CCS.tsk_status_value(CS.out.cruiseState.available, CS.out.accFaulted, CC.longActive)
+ acc_control = self.CCS.acc_control_value(CS.out.cruiseState.available, CS.out.accFaulted, CC.longActive)
accel = clip(actuators.accel, self.CCP.ACCEL_MIN, self.CCP.ACCEL_MAX) if CC.longActive else 0
- can_sends.extend(self.CCS.create_acc_accel_control(self.packer_pt, CANBUS.pt, tsk_status, accel))
+ stopping = actuators.longControlState == LongCtrlState.stopping
+ starting = actuators.longControlState == LongCtrlState.starting
+ can_sends.extend(self.CCS.create_acc_accel_control(self.packer_pt, CANBUS.pt, CS.acc_type, CC.longActive, accel,
+ acc_control, stopping, starting, CS.out.cruiseState.standstill))
# **** HUD Controls ***************************************************** #
diff --git a/selfdrive/car/volkswagen/carstate.py b/selfdrive/car/volkswagen/carstate.py
index facc740a153a15..cf4a252b65bd7f 100644
--- a/selfdrive/car/volkswagen/carstate.py
+++ b/selfdrive/car/volkswagen/carstate.py
@@ -215,6 +215,7 @@ def update_pq(self, pt_cp, cam_cp, ext_cp, trans_type):
ret.stockAeb = False
# Update ACC radar status.
+ self.acc_type = 0 # TODO: this is ACC "basic" with nonzero min speed, support FtS (1) later
ret.cruiseState.available = bool(pt_cp.vl["Motor_5"]["GRA_Hauptschalter"])
ret.cruiseState.enabled = bool(pt_cp.vl["Motor_2"]["GRA_Status"])
if self.CP.pcmCruise:
diff --git a/selfdrive/car/volkswagen/interface.py b/selfdrive/car/volkswagen/interface.py
index 3ed7a6244d5d6d..821eef44c70ac5 100644
--- a/selfdrive/car/volkswagen/interface.py
+++ b/selfdrive/car/volkswagen/interface.py
@@ -38,6 +38,7 @@ def get_params(candidate, fingerprint=gen_empty_fingerprint(), car_fw=None, expe
if any(msg in fingerprint[1] for msg in (0x1A0, 0xC2)): # Bremse_1, Lenkwinkel_1
ret.networkLocation = NetworkLocation.gateway
+ ret.experimentalLongitudinalAvailable = True
else:
ret.networkLocation = NetworkLocation.fwdCamera
@@ -49,13 +50,6 @@ def get_params(candidate, fingerprint=gen_empty_fingerprint(), car_fw=None, expe
# Panda ALLOW_DEBUG firmware required.
ret.dashcamOnly = True
- if experimental_long and ret.networkLocation == NetworkLocation.gateway:
- # Proof-of-concept, prep for E2E only. No radar points available. Follow-to-stop not yet supported, but should
- # be simple to add when a suitable test car becomes available. Panda ALLOW_DEBUG firmware required.
- ret.experimentalLongitudinalAvailable = True
- ret.openpilotLongitudinalControl = True
- ret.safetyConfigs[0].safetyParam |= Panda.FLAG_VOLKSWAGEN_LONG_CONTROL
-
else:
# Set global MQB parameters
ret.safetyConfigs = [get_safety_config(car.CarParams.SafetyModel.volkswagen)]
@@ -87,6 +81,13 @@ def get_params(candidate, fingerprint=gen_empty_fingerprint(), car_fw=None, expe
# Global longitudinal tuning defaults, can be overridden per-vehicle
+ if experimental_long and candidate in PQ_CARS:
+ # Proof-of-concept, prep for E2E only. No radar points available. Panda ALLOW_DEBUG firmware required.
+ ret.openpilotLongitudinalControl = True
+ ret.safetyConfigs[0].safetyParam |= Panda.FLAG_VOLKSWAGEN_LONG_CONTROL
+ if ret.transmissionType == TransmissionType.manual:
+ ret.minEnableSpeed = 4.5
+
ret.pcmCruise = not ret.openpilotLongitudinalControl
ret.longitudinalActuatorDelayUpperBound = 0.5 # s
ret.longitudinalTuning.kpV = [0.1]
diff --git a/selfdrive/car/volkswagen/pqcan.py b/selfdrive/car/volkswagen/pqcan.py
index e64bb2246e4fcc..30f3fcf62d4137 100644
--- a/selfdrive/car/volkswagen/pqcan.py
+++ b/selfdrive/car/volkswagen/pqcan.py
@@ -35,15 +35,15 @@ def create_acc_buttons_control(packer, bus, gra_stock_values, counter, cancel=Fa
return packer.make_can_msg("GRA_Neu", bus, values)
-def tsk_status_value(main_switch_on, acc_faulted, long_active):
+def acc_control_value(main_switch_on, acc_faulted, long_active):
if long_active:
- tsk_status = 1
+ acc_control = 1
elif main_switch_on:
- tsk_status = 2
+ acc_control = 2
else:
- tsk_status = 0
+ acc_control = 0
- return tsk_status
+ return acc_control
def acc_hud_status_value(main_switch_on, acc_faulted, long_active):
@@ -59,26 +59,32 @@ def acc_hud_status_value(main_switch_on, acc_faulted, long_active):
return hud_status
-def create_acc_accel_control(packer, bus, adr_status, accel):
+def create_acc_accel_control(packer, bus, acc_type, enabled, accel, acc_control, stopping, starting, standstill):
+ commands = []
+
values = {
- "ACS_Sta_ADR": adr_status,
- "ACS_StSt_Info": adr_status != 1,
- "ACS_Typ_ACC": 0, # TODO: this is ACC "basic", find a way to detect FtS support (1)
- "ACS_Sollbeschl": accel if adr_status == 1 else 3.01,
- "ACS_zul_Regelabw": 0.2 if adr_status == 1 else 1.27,
- "ACS_max_AendGrad": 3.0 if adr_status == 1 else 5.08,
+ "ACS_Sta_ADR": acc_control,
+ "ACS_StSt_Info": acc_control != 1,
+ "ACS_Typ_ACC": acc_type,
+ "ACS_Sollbeschl": accel if acc_control == 1 else 3.01,
+ "ACS_zul_Regelabw": 0.2 if acc_control == 1 else 1.27,
+ "ACS_max_AendGrad": 3.0 if acc_control == 1 else 5.08,
}
- return packer.make_can_msg("ACC_System", bus, values)
+ commands.append(packer.make_can_msg("ACC_System", bus, values))
+
+ return commands
-def create_acc_hud_control(packer, bus, acc_status, set_speed, lead_visible):
+def create_acc_hud_control(packer, bus, acc_hud_status, set_speed, lead_visible):
values = {
- "ACA_StaACC": acc_status,
+ "ACA_StaACC": acc_hud_status,
"ACA_Zeitluecke": 2,
"ACA_V_Wunsch": set_speed,
"ACA_gemZeitl": 8 if lead_visible else 0,
+ # TODO: ACA_ID_StaACC, ACA_AnzDisplay, ACA_kmh_mph, ACA_PrioDisp, ACA_Aend_Zeitluecke
+ # display/display-prio handling probably needed to stop confusing the instrument cluster
+ # kmh_mph handling probably needed to resolve rounding errors in displayed setpoint
}
- # TODO: ACA_ID_StaACC, ACA_AnzDisplay, ACA_kmh_mph, ACA_PrioDisp, ACA_Aend_Zeitluecke
return packer.make_can_msg("ACC_GRA_Anziege", bus, values)
diff --git a/selfdrive/car/volkswagen/values.py b/selfdrive/car/volkswagen/values.py
index 7c55736362b278..6425cd60be8089 100755
--- a/selfdrive/car/volkswagen/values.py
+++ b/selfdrive/car/volkswagen/values.py
@@ -20,7 +20,6 @@
class CarControllerParams:
HCA_STEP = 2 # HCA_01/HCA_1 message frequency 50Hz
ACC_CONTROL_STEP = 2 # ACC_06/ACC_07/ACC_System frequency 50Hz
- ACC_HUD_STEP = 4 # ACC_GRA_Anziege frequency 25Hz
ACCEL_MAX = 2.0 # 2.0 m/s max acceleration
ACCEL_MIN = -3.5 # 3.5 m/s max deceleration
@@ -37,6 +36,7 @@ def __init__(self, CP):
if CP.carFingerprint in PQ_CARS:
self.LDW_STEP = 5 # LDW_1 message frequency 20Hz
+ self.ACC_HUD_STEP = 4 # ACC_GRA_Anziege frequency 25Hz
self.STEER_DRIVER_ALLOWANCE = 80 # Driver intervention threshold 0.8 Nm
self.STEER_DELTA_UP = 6 # Max HCA reached in 1.00s (STEER_MAX / (50Hz * 1.00))
self.STEER_DELTA_DOWN = 10 # Min HCA reached in 0.60s (STEER_MAX / (50Hz * 0.60))
@@ -65,6 +65,7 @@ def __init__(self, CP):
else:
self.LDW_STEP = 10 # LDW_02 message frequency 10Hz
+ self.ACC_HUD_STEP = 6 # ACC_02 message frequency 16Hz
self.STEER_DRIVER_ALLOWANCE = 80 # Driver intervention threshold 0.8 Nm
self.STEER_DELTA_UP = 4 # Max HCA reached in 1.50s (STEER_MAX / (50Hz * 1.50))
self.STEER_DELTA_DOWN = 10 # Min HCA reached in 0.60s (STEER_MAX / (50Hz * 0.60))
| Null-effect refactor of existing VW PQ longitudinal control: reduce diff with #22963 by bringing in the abstraction updates and other miscellaneous cleanup. | https://api.github.com/repos/commaai/openpilot/pulls/25777 | 2022-09-14T04:57:40Z | 2022-09-15T00:04:28Z | 2022-09-15T00:04:28Z | 2022-09-15T00:26:35Z | 2,603 | commaai/openpilot | 9,362 |
Test run black on self | diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
index 01cf31502b1..b5893756705 100644
--- a/.github/workflows/lint.yml
+++ b/.github/workflows/lint.yml
@@ -23,6 +23,11 @@ jobs:
run: |
python -m pip install --upgrade pip
python -m pip install -e '.[d]'
+ python -m pip install tox
- name: Lint
uses: pre-commit/action@v3.0.0
+
+ - name: Run On Self
+ run: |
+ tox -e run_self
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 26b7fe8c791..a6dedc44968 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -4,14 +4,6 @@ exclude: ^(src/blib2to3/|profiling/|tests/data/)
repos:
- repo: local
hooks:
- - id: black
- name: black
- language: system
- entry: black
- minimum_pre_commit_version: 2.9.2
- require_serial: true
- types_or: [python, pyi]
-
- id: check-pre-commit-rev-in-example
name: Check pre-commit rev in example
language: python
diff --git a/tests/test_format.py b/tests/test_format.py
index a8a922d17db..0e1059c61e4 100644
--- a/tests/test_format.py
+++ b/tests/test_format.py
@@ -1,5 +1,5 @@
from dataclasses import replace
-from typing import Any, Iterator, List
+from typing import Any, Iterator
from unittest.mock import patch
import pytest
@@ -14,47 +14,6 @@
all_data_cases,
)
-SOURCES: List[str] = [
- "src/black/__init__.py",
- "src/black/__main__.py",
- "src/black/brackets.py",
- "src/black/cache.py",
- "src/black/comments.py",
- "src/black/concurrency.py",
- "src/black/const.py",
- "src/black/debug.py",
- "src/black/files.py",
- "src/black/linegen.py",
- "src/black/lines.py",
- "src/black/mode.py",
- "src/black/nodes.py",
- "src/black/numerics.py",
- "src/black/output.py",
- "src/black/parsing.py",
- "src/black/report.py",
- "src/black/rusty.py",
- "src/black/strings.py",
- "src/black/trans.py",
- "src/blackd/__init__.py",
- "src/blib2to3/pygram.py",
- "src/blib2to3/pytree.py",
- "src/blib2to3/pgen2/conv.py",
- "src/blib2to3/pgen2/driver.py",
- "src/blib2to3/pgen2/grammar.py",
- "src/blib2to3/pgen2/literals.py",
- "src/blib2to3/pgen2/parse.py",
- "src/blib2to3/pgen2/pgen.py",
- "src/blib2to3/pgen2/tokenize.py",
- "src/blib2to3/pgen2/token.py",
- "setup.py",
- "tests/test_black.py",
- "tests/test_blackd.py",
- "tests/test_format.py",
- "tests/optional.py",
- "tests/util.py",
- "tests/conftest.py",
-]
-
@pytest.fixture(autouse=True)
def patch_dump_to_file(request: Any) -> Iterator[None]:
@@ -93,11 +52,6 @@ def test_preview_minimum_python_310_format(filename: str) -> None:
assert_format(source, expected, mode, minimum_version=(3, 10))
-@pytest.mark.parametrize("filename", SOURCES)
-def test_source_is_formatted(filename: str) -> None:
- check_file("", filename, DEFAULT_MODE, data=False)
-
-
# =============== #
# Complex cases
# ============= #
diff --git a/tox.ini b/tox.ini
index 258e6c5c203..7af9e48d6f0 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,5 +1,5 @@
[tox]
-envlist = {,ci-}py{36,37,38,39,310,py3},fuzz
+envlist = {,ci-}py{36,37,38,39,310,py3},fuzz,run_self
[testenv]
setenv = PYTHONPATH = {toxinidir}/src
@@ -61,3 +61,10 @@ commands =
coverage erase
coverage run fuzz.py
coverage report
+
+[testenv:run_self]
+setenv = PYTHONPATH = {toxinidir}/src
+skip_install = True
+commands =
+ pip install -e .[d]
+ black --check {toxinidir}/src {toxinidir}/tests {toxinidir}/setup.py
| ### Description
Removed the hard-coded `SOURCES` list from *test_format.py*. Now, black is run as part of the *Lint* CI and looks for sources automatically.
### Checklist - did you ...
<!-- If any of the following items aren't relevant for your contribution
please still tick them so we know you've gone through the checklist.
All user-facing changes should get an entry. Otherwise, signal to us
this should get the magical label to silence the CHANGELOG entry check.
Tests are required for bugfixes and new features. Documentation changes
are necessary for formatting and most enhancement changes. -->
- [ ] Add a CHANGELOG entry if necessary?
- [X] Add / update tests if necessary?
- [ ] Add new / update outdated documentation?
<!-- Just as a reminder, everyone in all psf/black spaces including PRs
must follow the PSF Code of Conduct (link below).
Finally, once again thanks for your time and effort. If you have any
feedback in regards to your experience contributing here, please
let us know!
Helpful links:
PSF COC: https://www.python.org/psf/conduct/
Contributing docs: https://black.readthedocs.io/en/latest/contributing/index.html
Chat on Python Discord: https://discord.gg/RtVdv86PrH -->
| https://api.github.com/repos/psf/black/pulls/3114 | 2022-06-08T12:33:03Z | 2022-06-14T16:08:36Z | 2022-06-14T16:08:36Z | 2022-06-15T07:29:58Z | 1,182 | psf/black | 24,054 |
[Add] : Wildcard Matching program under DYNAMIC PROGRAMMING | diff --git a/dynamic_programming/wildcard_matching.py b/dynamic_programming/wildcard_matching.py
new file mode 100644
index 000000000000..4ffc4b5d46aa
--- /dev/null
+++ b/dynamic_programming/wildcard_matching.py
@@ -0,0 +1,62 @@
+"""
+Given two strings, an input string and a pattern,
+this program checks if the input string matches the pattern.
+
+Example :
+input_string = "baaabab"
+pattern = "*****ba*****ab"
+Output: True
+
+This problem can be solved using the concept of "DYNAMIC PROGRAMMING".
+
+We create a 2D boolean matrix, where each entry match_matrix[i][j] is True
+if the first i characters in input_string match the first j characters
+of pattern. We initialize the first row and first column based on specific
+rules, then fill up the rest of the matrix using a bottom-up dynamic
+programming approach.
+
+The amount of match that will be determined is equal to match_matrix[n][m]
+where n and m are lengths of the input_string and pattern respectively.
+
+"""
+
+
+def is_pattern_match(input_string: str, pattern: str) -> bool:
+ """
+ >>> is_pattern_match('baaabab','*****ba*****ba')
+ False
+ >>> is_pattern_match('baaabab','*****ba*****ab')
+ True
+ >>> is_pattern_match('aa','*')
+ True
+ """
+
+ input_length = len(input_string)
+ pattern_length = len(pattern)
+
+ match_matrix = [[False] * (pattern_length + 1) for _ in range(input_length + 1)]
+
+ match_matrix[0][0] = True
+
+ for j in range(1, pattern_length + 1):
+ if pattern[j - 1] == "*":
+ match_matrix[0][j] = match_matrix[0][j - 1]
+
+ for i in range(1, input_length + 1):
+ for j in range(1, pattern_length + 1):
+ if pattern[j - 1] in ("?", input_string[i - 1]):
+ match_matrix[i][j] = match_matrix[i - 1][j - 1]
+ elif pattern[j - 1] == "*":
+ match_matrix[i][j] = match_matrix[i - 1][j] or match_matrix[i][j - 1]
+ else:
+ match_matrix[i][j] = False
+
+ return match_matrix[input_length][pattern_length]
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+
+ print(f"{is_pattern_match('baaabab','*****ba*****ab')}")
| ### Describe your change:
This PR adds the wildcard matching program under dynamic programming.
Wildcard Matching:
![image](https://github.com/TheAlgorithms/Python/assets/118645569/a92449af-3b0b-4f82-8a90-965c438bda51)
![wildcard-pattern-matching](https://github.com/TheAlgorithms/Python/assets/118645569/426c6487-d12e-4d73-ae04-7520bc1e2cda)
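The DP from the diff can be restated as a compact, standalone sketch (same logic as the new `is_pattern_match`, trimmed of doctests):

```python
def is_pattern_match(s: str, p: str) -> bool:
    """True if input string s matches pattern p ('?' = any one char, '*' = any run)."""
    n, m = len(s), len(p)
    # dp[i][j]: first i chars of s match first j chars of p
    dp = [[False] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = True
    for j in range(1, m + 1):               # leading '*'s can match the empty string
        if p[j - 1] == "*":
            dp[0][j] = dp[0][j - 1]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if p[j - 1] in ("?", s[i - 1]):
                dp[i][j] = dp[i - 1][j - 1]
            elif p[j - 1] == "*":
                # '*' absorbs one more char of s, or matches nothing
                dp[i][j] = dp[i - 1][j] or dp[i][j - 1]
    return dp[n][m]

print(is_pattern_match("baaabab", "*****ba*****ab"))  # -> True
```

Reproducing the three doctest cases from the diff: `"baaabab"` matches `"*****ba*****ab"` but not `"*****ba*****ba"` (the string ends in `b`, the pattern in `a`), and `"aa"` matches `"*"`.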
* [x] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [ ] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [x] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [x] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms include at least one URL that points to Wikipedia or another similar explanation.
* [x] If this pull request resolves one or more open issues then the description above includes the issue number(s) with a [closing keyword](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue): "Fixes #ISSUE-NUMBER".
| https://api.github.com/repos/TheAlgorithms/Python/pulls/10403 | 2023-10-14T03:52:15Z | 2023-10-15T19:27:47Z | 2023-10-15T19:27:47Z | 2023-10-15T19:27:47Z | 626 | TheAlgorithms/Python | 30,130 |
Ciphers | diff --git a/docs/ciphers.rst b/docs/ciphers.rst
new file mode 100644
index 00000000000..12c403d09a8
--- /dev/null
+++ b/docs/ciphers.rst
@@ -0,0 +1,211 @@
+============
+Ciphersuites
+============
+
+.. contents:: Table of Contents
+ :local:
+
+
+.. _ciphersuites:
+
+Introduction
+============
+
+Autoupdates
+-----------
+
+Within certain limits, TLS server software can choose what kind of
+cryptography to use when a client connects. These choices can affect
+security, compatibility, and performance in complex ways. Most of
+these options are independent of a particular certificate. The Let's
+Encrypt client tries to provide defaults that we think are most useful
+to our users.
+
+As described below, the Let's Encrypt client will default to modifying
+server software's cryptographic settings to keep these up-to-date with
+what we think are appropriate defaults when new versions of the Let's
+Encrypt client are installed (for example, by an operating system package
+manager).
+
+When this feature is implemented, this document will be updated
+to describe how to disable these automatic changes.
+
+
+Cryptographic choices
+---------------------
+
+Software that uses cryptography must inevitably make choices about what
+kind of cryptography to use and how. These choices entail assumptions
+about how well particular cryptographic mechanisms resist attack, and what
+trade-offs are available and appropriate. The choices are constrained
+by compatibility issues (in order to interoperate with other software,
+an implementation must agree to use cryptographic mechanisms that the
+other side also supports) and protocol issues (cryptographic mechanisms
+must be specified in protocols and there must be a way to agree to use
+them in a particular context).
+
+The best choices for a particular application may change over time in
+response to new research, new standardization events, changes in computer
+hardware, and changes in the prevalence of legacy software. Much important
+research on cryptanalysis and cryptographic vulnerabilities is unpublished
+because many researchers have been working in the interest of improving
+some entities' communications security while weakening, or failing to
+improve, others' security. But important information that improves our
+understanding of the state of the art is published regularly.
+
+When enabling TLS support in a compatible web server (which is a separate
+step from obtaining a certificate), Let's Encrypt has the ability to
+update that web server's TLS configuration. Again, this is *different
+from the cryptographic particulars of the certificate itself*; the
+certificate as of the initial release will be RSA-signed using one of
+Let's Encrypt's 2048-bit RSA keys, and will describe the subscriber's
+RSA public key ("subject public key") of at least 2048 bits, which is
+used for key establishment.
+
+Note that the subscriber's RSA public key can be used in a wide variety
+of key establishment methods, most of which do not use RSA directly
+for key exchange, but only for authenticating the server! For example,
+in DHE and ECDHE key exchanges, the subject public key is just used to
+sign other parameters for authentication. You do not have to "use RSA"
+for other purposes just because you're using an RSA key for authentication.
+
+The certificate doesn't specify other cryptographic or ciphersuite
+particulars; for example, it doesn't say whether or not parties should
+use a particular symmetric algorithm like 3DES, or what cipher modes
+they should use. All of these details are negotiated between client
+and server independent of the content of the ciphersuite. The
+Let's Encrypt project hopes to provide useful defaults that reflect
+good security choices with respect to the publicly-known state of the
+art. However, the Let's Encrypt certificate authority does *not*
+dictate end-users' security policy, and any site is welcome to change
+its preferences in accordance with its own policy or its administrators'
+preferences, and use different cryptographic mechanisms or parameters,
+or a different priority order, than the defaults provided by the Let's
+Encrypt client.
+
+If you don't use the Let's Encrypt client to configure your server
+directly, because the client doesn't integrate with your server software
+or because you chose not to use this integration, then the cryptographic
+defaults haven't been modified, and the cryptography chosen by the server
+will still be whatever the default for your software was. For example,
+if you obtain a certificate using *standalone* mode and then manually
+install it in an IMAP or LDAP server, your cryptographic settings will
+not be modified by the client in any way.
+
+
+Sources of defaults
+-------------------
+
+Initially, the Let's Encrypt client will configure users' servers to
+use the cryptographic defaults recommended by the Mozilla project.
+These settings are well-reasoned recommendations that carefully
+consider client software compatibility. They are described at
+
+https://wiki.mozilla.org/Security/Server_Side_TLS
+
+and the version implemented by the Let's Encrypt client will be the
+version that was most current as of the release date of each client
+version. Mozilla offers three separate sets of cryptographic options,
+which trade off security and compatibility differently. These are
+referred to as the "Modern", "Intermediate", and "Old" configurations
+(in order from most secure to least secure, and least-backwards compatible
+to most-backwards compatible). The client will follow the Mozilla defaults
+for the *Intermediate* configuration by default, at least with regards to
+ciphersuites and TLS versions. Mozilla's web site describes which client
+software will be compatible with each configuration. You can also use
+the Qualys SSL Labs site, which the Let's Encrypt software will suggest
+when installing a certificate, to test your server and see whether it
+will be compatible with particular software versions.
+
+It will be possible to ask the Let's Encrypt client to instead apply
+(and track) Modern or Old configurations.
+
+The Let's Encrypt project expects to follow the Mozilla recommendations
+in the future as those recommendations are updated. (For example, some
+users have proposed prioritizing a new ciphersuite known as ``0xcc13``
+which uses the ChaCha and Poly1305 algorithms, and which is already
+implemented by the Chrome browser. Mozilla has delayed recommending
+``0xcc13`` over compatibility and standardization concerns, but is likely
+to recommend it in the future once these concerns have been addressed. At
+that point, the Let's Encrypt client would likely follow the Mozilla
+recommendations and favor the use of this ciphersuite as well.)
+
+The Let's Encrypt project may deviate from the Mozilla recommendations
+in the future if good cause is shown and we believe our users'
+priorities would be well-served by doing so. In general, please address
+relevant proposals for changing priorities to the Mozilla security
+team first, before asking the Let's Encrypt project to change the
+client's priorities. The Mozilla security team is likely to have more
+resources and expertise to bring to bear on evaluating reasons why its
+recommendations should be updated.
+
+The Let's Encrypt project will entertain proposals to create a *very*
+small number of alternative configurations (apart from Modern,
+Intermediate, and Old) that there's reason to believe would be widely
+used by sysadmins; this would usually be a preferable course to modifying
+an existing configuration. For example, if many sysadmins want their
+servers configured to track a different expert recommendation, Let's
+Encrypt could add an option to do so.
+
+
+Resources for recommendations
+-----------------------------
+
+In the course of considering how to handle this issue, we received
+recommendations with sources of expert guidance on ciphersuites and other
+cryptographic parameters. We're grateful to everyone who contributed
+suggestions. The recommendations we received are available at
+
+https://github.com/letsencrypt/letsencrypt/wiki/Ciphersuite-guidance
+
+Let's Encrypt client users are welcome to review these authorities to
+better inform their own cryptographic parameter choices. We also
+welcome suggestions of other resources to add to this list. Please keep
+in mind that different recommendations may reflect different priorities
+or evaluations of trade-offs, especially related to compatibility!
+
+
+Changing your settings
+----------------------
+
+This will probably look something like
+
+.. code-block:: shell
+
+ letsencrypt --cipher-recommendations mozilla-secure
+ letsencrypt --cipher-recommendations mozilla-intermediate
+ letsencrypt --cipher-recommendations mozilla-old
+
+to track Mozilla's *Secure*, *Intermediate*, or *Old* recommendations,
+and
+
+.. code-block:: shell
+
+ letsencrypt --update-ciphers on
+
+to enable updating ciphers with each new Let's Encrypt client release,
+or
+
+.. code-block:: shell
+
+ letsencrypt --update-ciphers off
+
+to disable automatic configuration updates. These features have not yet
+been implemented and this syntax may change when they are implemented.
+
+
+TODO
+----
+
+The status of this feature is tracked as part of issue #1123 in our
+bug tracker.
+
+https://github.com/letsencrypt/letsencrypt/issues/1123
+
+Prior to implementation of #1123, the client does not actually modify
+ciphersuites (this is intended to be implemented as a "configuration
+enhancement", but the only configuration enhancement implemented
+so far is redirecting HTTP requests to HTTPS in web servers, the
+"redirect" enhancement). The changes here would probably be either a new
+"ciphersuite" enhancement in each plugin that provides an installer,
+or a family of enhancements, one per selectable ciphersuite configuration.
| Fixes #555 (in terms of providing and documenting a course of action, not implementing it!).
The theory is that we have decided to track Mozilla's ciphersuite recommendations, which Mozilla has extensively considered for web client compatibility issues and continues to update. We are also open to providing options to track other sources of recommendations, recognizing that different sites have different needs. This patch documents the issues involved in general terms and sets out what we have decided to do. As @pde described, we still need to actually implement these things, which is tracked at #1123.
| https://api.github.com/repos/certbot/certbot/pulls/1261 | 2015-10-31T16:02:22Z | 2015-11-02T05:52:18Z | 2015-11-02T05:52:17Z | 2016-05-06T19:22:16Z | 2,149 | certbot/certbot | 1,371 |
Catch AttributeError in utils.super_len | diff --git a/HISTORY.md b/HISTORY.md
index ccf4e17400..99d195a798 100644
--- a/HISTORY.md
+++ b/HISTORY.md
@@ -11,6 +11,9 @@ dev
backwards compatible as it inherits from previously thrown exceptions.
Can be caught from `requests.exceptions.RequestException` as well.
+- Catch `AttributeError` when calculating length of files obtained by
+ `Tarfile.extractfile()`
+
2.26.0 (2021-07-13)
-------------------
@@ -1702,7 +1705,7 @@ This is not a backwards compatible change.
- Automatic Authentication API Change
- Smarter Query URL Parameterization
- Allow file uploads and POST data together
--
+-
New Authentication Manager System
@@ -1721,7 +1724,7 @@ This is not a backwards compatible change.
0.2.3 (2011-02-15)
------------------
--
+-
New HTTPHandling Methods
diff --git a/requests/utils.py b/requests/utils.py
index 41bfb82fe0..6a9f549934 100644
--- a/requests/utils.py
+++ b/requests/utils.py
@@ -124,7 +124,10 @@ def super_len(o):
elif hasattr(o, 'fileno'):
try:
fileno = o.fileno()
- except io.UnsupportedOperation:
+ except (io.UnsupportedOperation, AttributeError):
+ # AttributeError is a surprising exception, seeing as how we've just checked
+ # that `hasattr(o, 'fileno')`. It happens for objects obtained via
+ # `Tarfile.extractfile()`, per issue 5229.
pass
else:
total_length = os.fstat(fileno).st_size
diff --git a/tests/test_utils.py b/tests/test_utils.py
index 98ffb25a6c..559dee657f 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -4,6 +4,7 @@
import copy
import filecmp
from io import BytesIO
+import tarfile
import zipfile
from collections import deque
@@ -86,6 +87,18 @@ def test_file(self, tmpdir, mode, warnings_num, recwarn):
assert super_len(fd) == 4
assert len(recwarn) == warnings_num
+ def test_tarfile_member(self, tmpdir):
+ file_obj = tmpdir.join('test.txt')
+ file_obj.write('Test')
+
+ tar_obj = str(tmpdir.join('test.tar'))
+ with tarfile.open(tar_obj, 'w') as tar:
+ tar.add(str(file_obj), arcname='test.txt')
+
+ with tarfile.open(tar_obj) as tar:
+ member = tar.extractfile('test.txt')
+ assert super_len(member) == 4
+
def test_super_len_with__len__(self):
foo = [1,2,3,4]
len_foo = super_len(foo)
| This allows it to handle files obtained via `Tarfile.extractfile()`.
Fixes #5229. | https://api.github.com/repos/psf/requests/pulls/5239 | 2019-10-21T11:55:23Z | 2021-11-28T20:03:31Z | 2021-11-28T20:03:31Z | 2022-02-26T21:00:46Z | 671 | psf/requests | 32,531 |
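The failure mode this patch guards against can be reproduced without requests at all. Below is a minimal sketch: `robust_len` is a hypothetical helper mirroring the fallback logic of `requests.utils.super_len`, not requests' actual code. Members returned by `TarFile.extractfile()` pass `hasattr(o, 'fileno')`, yet calling `fileno()` raises `AttributeError` because the underlying in-archive file object does not implement it.

```python
import io
import os
import tarfile
import tempfile

def robust_len(o):
    """Best-effort length of a file-like object, tolerating objects
    whose fileno() raises AttributeError (e.g. members returned by
    TarFile.extractfile(), per requests issue 5229)."""
    if hasattr(o, "fileno"):
        try:
            return os.fstat(o.fileno()).st_size
        except (io.UnsupportedOperation, AttributeError):
            pass  # fall back to seeking, as super_len ultimately does
    pos = o.tell()
    length = o.seek(0, os.SEEK_END)
    o.seek(pos)
    return length

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test.txt")
    with open(path, "w") as f:
        f.write("Test")
    tar_path = os.path.join(tmp, "test.tar")
    with tarfile.open(tar_path, "w") as tar:
        tar.add(path, arcname="test.txt")
    with tarfile.open(tar_path) as tar:
        member = tar.extractfile("test.txt")
        print(robust_len(member))  # 4
```

Without the `AttributeError` in the except clause, the lookup would propagate the surprising exception instead of falling through to the seek-based fallback — exactly what the one-line change to `super_len` fixes.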
Backport PR #46055 on branch 1.4.x (DOC: Add "build C extensions" note) | diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index 5f36a2a609c9f..e2c1463b7dfba 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -26,14 +26,28 @@ with a full pandas development environment.
**Docker Commands**
-Pass your GitHub username in the ``DockerFile`` to use your own fork::
+Build the Docker image::
# Build the image pandas-yourname-env
docker build --tag pandas-yourname-env .
- # Run a container and bind your local forked repo, pandas-yourname, to the container
- docker run -it --rm -v path-to-pandas-yourname:/home/pandas-yourname pandas-yourname-env
+ # Or build the image by passing your GitHub username to use your own fork
+ docker build --build-arg gh_username=yourname --tag pandas-yourname-env .
-Even easier, you can integrate Docker with the following IDEs:
+Run Container::
+
+ # Run a container and bind your local repo to the container
+ docker run -it -w /home/pandas --rm -v path-to-local-pandas-repo:/home/pandas pandas-yourname-env
+
+.. note::
+ If you bind your local repo for the first time, you have to build the C extensions afterwards.
+ Run the following command inside the container::
+
+ python setup.py build_ext -j 4
+
+ You need to rebuild the C extensions anytime the Cython code in ``pandas/_libs`` changes.
+ This most frequently occurs when changing or merging branches.
+
+*Even easier, you can integrate Docker with the following IDEs:*
**Visual Studio Code**
@@ -47,11 +61,6 @@ Enable Docker support and use the Services tool window to build and manage image
run and interact with containers.
See https://www.jetbrains.com/help/pycharm/docker.html for details.
-Note that you might need to rebuild the C extensions if/when you merge with upstream/main using::
-
- python setup.py build_ext -j 4
-
-
Creating an environment without Docker
---------------------------------------
| Backport PR #46055: DOC: Add "build C extensions" note | https://api.github.com/repos/pandas-dev/pandas/pulls/46168 | 2022-02-27T15:58:25Z | 2022-02-27T18:10:20Z | 2022-02-27T18:10:20Z | 2022-02-27T18:10:20Z | 503 | pandas-dev/pandas | 45,178 |
Switch to Go 1.5.1 (fixes #955) | diff --git a/.travis.yml b/.travis.yml
index 3041fdd824b..ff8a1038a4c 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -7,7 +7,7 @@ services:
# http://docs.travis-ci.com/user/ci-environment/#CI-environment-OS
# gimme has to be kept in sync with Boulder's Go version setting in .travis.yml
before_install:
- - '[ "xxx$BOULDER_INTEGRATION" = "xxx" ] || eval "$(gimme 1.5)"'
+ - '[ "xxx$BOULDER_INTEGRATION" = "xxx" ] || eval "$(gimme 1.5.1)"'
# using separate envs with different TOXENVs creates 4x1 Travis build
# matrix, which allows us to clearly distinguish which component under
| https://api.github.com/repos/certbot/certbot/pulls/1006 | 2015-10-16T22:28:43Z | 2015-10-17T17:44:04Z | 2015-10-17T17:44:04Z | 2016-05-06T19:21:35Z | 203 | certbot/certbot | 414 |
|
Scope all hubconf.py imports for torch.hub.list() | diff --git a/hubconf.py b/hubconf.py
index 1876441d8a8..3b3dfe0e9e2 100644
--- a/hubconf.py
+++ b/hubconf.py
@@ -5,15 +5,8 @@
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
"""
-from pathlib import Path
-
import torch
-from utils.general import check_requirements, set_logging
-
-dependencies = ['torch', 'yaml']
-check_requirements(Path(__file__).parent / 'requirements.txt', exclude=('tensorboard', 'pycocotools', 'thop'))
-
def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbose=True):
"""Creates a specified YOLOv5 model
@@ -29,11 +22,16 @@ def _create(name, pretrained=True, channels=3, classes=80, autoshape=True, verbo
Returns:
YOLOv5 pytorch model
"""
+ from pathlib import Path
+
from models.yolo import Model, attempt_load
+ from utils.general import check_requirements, set_logging
from utils.google_utils import attempt_download
from utils.torch_utils import select_device
+ check_requirements(Path(__file__).parent / 'requirements.txt', exclude=('tensorboard', 'pycocotools', 'thop'))
set_logging(verbose=verbose)
+
fname = Path(name).with_suffix('.pt') # checkpoint filename
try:
if pretrained and channels == 3 and classes == 80:
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Streamlined dependency checks in `hubconf.py` for YOLOv5 models.
### 📊 Key Changes
- Removed the global import of `Path` from `pathlib`.
- Moved `check_requirements` and `set_logging` imports inside the `_create` function.
- Restructured the code to check requirements only within the `_create` function.
### 🎯 Purpose & Impact
- 🧹 **Cleaner Code Organization:** Importing modules only where they're needed makes the code less cluttered.
- ⏱ **Performance Improvement:** May slightly improve the load time of the module by avoiding checking for requirements until necessary.
- 🛠 **User Experience:** Users can potentially see a more streamlined interaction when loading models, with more focused error messages if there are missing requirements. | https://api.github.com/repos/ultralytics/yolov5/pulls/3145 | 2021-05-12T18:25:13Z | 2021-05-12T18:28:26Z | 2021-05-12T18:28:26Z | 2024-01-19T18:20:26Z | 351 | ultralytics/yolov5 | 25,645 |
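The idea behind scoping the imports can be sketched without torch at all: a hub-style `list()` only needs to import the `hubconf` module and enumerate its callables, so heavy dependencies can be deferred into the factory function. In this sketch, `heavy_dep` is a deliberately nonexistent placeholder module, and the in-memory "hubconf" is illustrative rather than YOLOv5's real one.

```python
import types

# A fake "hubconf" whose factory defers its heavy dependency, in the
# spirit of this PR: listing entrypoints only imports the module, so
# heavy imports are pushed inside the factory (like _create above).
HUBCONF_SRC = '''
def create(name):
    import heavy_dep  # deferred: resolved only when a model is built
    return heavy_dep.build(name)
'''

hubconf = types.ModuleType("hubconf_demo")
exec(HUBCONF_SRC, hubconf.__dict__)

# Enumerating entrypoints succeeds even though "heavy_dep" is missing:
entrypoints = [name for name, obj in vars(hubconf).items()
               if callable(obj) and not name.startswith("_")]
print(entrypoints)  # ['create']
```

Only calling the factory triggers the deferred import (and raises `ImportError` here, since the placeholder does not exist) — which is why moving `check_requirements` and friends inside `_create` lets `torch.hub.list()` succeed in minimal environments.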
C++20 is more up to date than C++17 | diff --git a/CppCoreGuidelines.md b/CppCoreGuidelines.md
index 71bf648af..2ad06fdef 100644
--- a/CppCoreGuidelines.md
+++ b/CppCoreGuidelines.md
@@ -241,7 +241,7 @@ All C++ programmers. This includes [programmers who might consider C](#S-cpl).
## <a name="SS-aims"></a>In.aims: Aims
-The purpose of this document is to help developers to adopt modern C++ (currently C++17) and to achieve a more uniform style across code bases.
+The purpose of this document is to help developers to adopt modern C++ (currently C++20 and C++17) and to achieve a more uniform style across code bases.
We do not suffer the delusion that every one of these rules can be effectively applied to every code base. Upgrading old systems is hard. However, we do believe that a program that uses a rule is less error-prone and more maintainable than one that does not. Often, rules also lead to faster/easier initial development.
As far as we can tell, these rules lead to code that performs as well or better than older, more conventional techniques; they are meant to follow the zero-overhead principle ("what you don't use, you don't pay for" or "when you use an abstraction mechanism appropriately, you get at least as good performance as if you had handcoded using lower-level language constructs").
| This language matches that of the paragraph a few lines above | https://api.github.com/repos/isocpp/CppCoreGuidelines/pulls/2085 | 2023-06-22T13:26:58Z | 2023-06-22T16:25:53Z | 2023-06-22T16:25:53Z | 2023-06-22T16:26:19Z | 314 | isocpp/CppCoreGuidelines | 16,117 |
Added Transfer Learning for NLP | diff --git a/books.md b/books.md
index 5bc43371..b9efc489 100644
--- a/books.md
+++ b/books.md
@@ -92,6 +92,7 @@ The following is a list of free and/or open source books on machine learning, st
* [Deep Learning for Natural Lanuage Processing](https://www.manning.com/books/deep-learning-for-natural-language-processing) Early access book
* [Natural Language Processing in Action, Second Edition](https://www.manning.com/books/natural-language-processing-in-action-second-edition) Early access book
* [Getting Started with Natural Language Processing in Action](https://www.manning.com/books/getting-started-with-natural-language-processing) Early access book
+* [Transfer Learning for Natural Language Processing](https://www.manning.com/books/transfer-learning-for-natural-language-processing) by Paul Azunre
## Information Retrieval
| Hi,
Stjepan from Manning here. I thought this title might be a good match for your list. Thank you for considering it.
Best, | https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/825 | 2021-11-02T09:33:58Z | 2021-11-23T14:19:23Z | 2021-11-23T14:19:23Z | 2021-11-23T14:19:23Z | 198 | josephmisiti/awesome-machine-learning | 51,912 |
Clear chatbox input only after response it generated | diff --git a/website/src/components/Chat/ChatConversation.tsx b/website/src/components/Chat/ChatConversation.tsx
index 23ef864400..ce5138d686 100644
--- a/website/src/components/Chat/ChatConversation.tsx
+++ b/website/src/components/Chat/ChatConversation.tsx
@@ -82,7 +82,6 @@ export const ChatConversation = memo(function ChatConversation({ chatId, getConf
return;
}
setIsSending.on();
- inputRef.current!.value = "";
// TODO: maybe at some point we won't need to access the rendered HTML directly, but use react state
const parentId = document.getElementById(LAST_ASSISTANT_MESSAGE_ID)?.dataset.id ?? null;
@@ -113,6 +112,7 @@ export const ChatConversation = memo(function ChatConversation({ chatId, getConf
}
setMessages((messages) => [...messages, prompter_message]);
+ inputRef.current!.value = "";
// after creating the prompters message, handle the assistant's case
await createAndFetchAssistantMessage({ parentId: prompter_message.id, chatId });
}, [setIsSending, chatId, messages, createAndFetchAssistantMessage, toast, isSending]);
Now the user will have a better experience chatting with the assistant. The chat box input is only cleared after a response is generated. Fixes #2604 | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/2768 | 2023-04-20T04:17:03Z | 2023-04-20T05:30:45Z | 2023-04-20T05:30:45Z | 2023-04-20T05:30:45Z | 270 | LAION-AI/Open-Assistant | 37,581 |
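The ordering the TypeScript change above enforces — mutate the input state only after the send succeeds — can be sketched in Python rather than TypeScript/React; `send`, `submit`, and the `inbox` dict are illustrative stand-ins, not Open-Assistant APIs.

```python
import asyncio

async def send(message, fail=False):
    await asyncio.sleep(0)  # stand-in for the create-message API call
    if fail:
        raise RuntimeError("send failed")
    return {"id": 1, "content": message}

async def submit(inbox, fail=False):
    """Clear the input only after the message was created, so a failed
    send does not silently discard the user's draft."""
    draft = inbox["value"]
    try:
        msg = await send(draft, fail=fail)
    except RuntimeError:
        return None  # draft survives in inbox["value"]
    inbox["value"] = ""  # clear only on success
    return msg

inbox = {"value": "hello"}
print(asyncio.run(submit(inbox, fail=True)), repr(inbox["value"]))  # None 'hello'
print(asyncio.run(submit(inbox)), repr(inbox["value"]))  # message dict, then ''
```

Clearing the box before awaiting (the pre-patch order) would leave the user with an empty input and a lost draft whenever the request failed.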
[context] support arbitrary module materialization. | diff --git a/colossalai/utils/model/lazy_init_context.py
index 290ab7aace59..a72c59fee074 100644
--- a/colossalai/utils/model/lazy_init_context.py
+++ b/colossalai/utils/model/lazy_init_context.py
@@ -8,6 +8,7 @@
import typing
from typing import List, Callable
from colossalai.utils.model.utils import substitute_init_recursively
+import copy
class LazyInitContext():
@@ -102,7 +103,8 @@ def _wrap_module_init(self, func):
has_device = 'device' in inspect.signature(func).parameters
def layer_lazy_init(module, *args, **kwargs):
- self._intercepted_init_func_cache.append(dict(func=func, module=module, args=args, kwargs=kwargs))
+ self._intercepted_init_func_cache.append(
+ dict(func=func, module=module, args=args, kwargs=copy.deepcopy(kwargs)))
if has_device:
kwargs['device'] = 'meta'
func(module, *args, **kwargs)
@@ -162,6 +164,12 @@ def __enter__(self):
def __exit__(self, *args, **kwargs):
self._unpatch_submodule_init()
+ # build model_rebuild_dict in reverse order to make sure get correct init func for inherited class.
+ self.module_rebuild_dict = {}
+ self._intercepted_init_func_cache.reverse()
+ for cache in self._intercepted_init_func_cache:
+ self.module_rebuild_dict[cache['module']] = (cache['func'], cache['args'], cache['kwargs'])
+ self._intercepted_init_func_cache.reverse()
def lazy_init_parameters(self, model: torch.nn.Module, device='cpu', call_back: Callable = None):
"""
@@ -179,7 +187,34 @@ def lazy_init_parameters(self, model: torch.nn.Module, device='cpu', call_back:
for name, buffer in model.named_buffers():
param_id_to_name[id(buffer)] = name
+ assert model in self.module_rebuild_dict, 'We only support rebuild modules which intercepted during initializing by us.'
+
+ def _process_arg(arg):
+ """
+ Process args recursively. If arg is a torch.nn.Module instance in module_rebuild_dict,
+ we need to rebuild it with real parameters. If arg is a tuple or list, we will process
+ the element of arg with this function again.
+ """
+ if torch.is_tensor(arg):
+ tensor_id = id(arg)
+ if tensor_id in param_id_to_name:
+ arg = _replace_meta_param_with_real_param(arg)
+
+ elif isinstance(arg, torch.nn.Module):
+ if arg in self.module_rebuild_dict:
+ arg = self.lazy_init_parameters(model=arg, device=device, call_back=call_back)
+
+ elif isinstance(arg, (tuple, list)):
+ rst_list = []
+ for element in arg:
+ processed_element = _process_arg(element)
+ rst_list.append(processed_element)
+ arg = rst_list
+ return arg
+
def _replace_meta_param_with_real_param(meta_param):
+ if meta_param.device != 'meta':
+ return meta_param
tensor_id = id(meta_param)
param_full_name = param_id_to_name[tensor_id]
real_param = torch.empty_like(meta_param, dtype=meta_param.dtype, device=device)
@@ -199,36 +234,24 @@ def _replace_meta_param_with_real_param(meta_param):
call_back(real_param)
return real_param
- # build modules
- # visit the cache list in reverse order
- for index in range(len(self._intercepted_init_func_cache)):
- cache = self._intercepted_init_func_cache[len(self._intercepted_init_func_cache) - index - 1]
- func = cache['func']
- module = cache['module']
- args = list(cache['args'])
- kwargs = cache['kwargs']
-
- # check args for parameter replacement
- for idx, arg in enumerate(args):
- if torch.is_tensor(arg):
- tensor_id = id(arg)
-
- if tensor_id not in param_id_to_name:
- continue
- else:
- arg = _replace_meta_param_with_real_param(arg)
- args[idx] = arg
-
- # check kwargs for parameter replacement
- for arg_name, arg in enumerate(kwargs):
- if torch.is_tensor(arg):
- tensor_id = id(arg)
-
- if tensor_id not in param_id_to_name:
- continue
- else:
- arg = _replace_meta_param_with_real_param(arg)
- kwargs[arg_name] = arg
-
- with torch.no_grad():
- func(module, *args, **kwargs)
+ func, args, kwargs = self.module_rebuild_dict[model]
+ args = list(args)
+
+ # check args for parameter replacement
+ for idx, arg in enumerate(args):
+ arg = _process_arg(arg)
+ args[idx] = arg
+
+ # check kwargs for parameter replacement
+ for arg_name, arg in kwargs.items():
+ if arg_name == 'device':
+ arg = device
+ else:
+ arg = _process_arg(arg)
+ kwargs[arg_name] = arg
+
+ # build user specified model
+ with torch.no_grad():
+ func(model, *args, **kwargs)
+
+ return model
diff --git a/tests/test_utils/test_lazy_init_ctx.py b/tests/test_utils/test_lazy_init_ctx.py
index 4d4c0598c565..fccf1588b2da 100644
--- a/tests/test_utils/test_lazy_init_ctx.py
+++ b/tests/test_utils/test_lazy_init_ctx.py
@@ -1,9 +1,20 @@
import torch
from colossalai.utils.model.lazy_init_context import LazyInitContext
from torchvision.models import resnet34
+import random
+import numpy as np
+
+MANUAL_SEED = 0
+random.seed(MANUAL_SEED)
+np.random.seed(MANUAL_SEED)
+torch.manual_seed(MANUAL_SEED)
def test_lazy_init():
+ cpu_rng_state = torch.get_rng_state()
+ origin_model = resnet34(num_classes=10)
+ origin_param_dict = dict(origin_model.named_parameters())
+ torch.set_rng_state(cpu_rng_state)
ctx = LazyInitContext()
with ctx:
model = resnet34(num_classes=10)
@@ -16,6 +27,9 @@ def test_lazy_init():
assert not param.is_meta
for buffer in model.buffers():
assert not buffer.is_meta
+ param_dict = dict(model.named_parameters())
+ for key in origin_param_dict.keys():
+ assert origin_param_dict[key].data.equal(param_dict[key].data)
if __name__ == '__main__':
diff --git a/tests/test_utils/test_materialize_arbitary_lazy_module.py b/tests/test_utils/test_materialize_arbitary_lazy_module.py
new file mode 100644
index 000000000000..b84293490cb6
--- /dev/null
+++ b/tests/test_utils/test_materialize_arbitary_lazy_module.py
@@ -0,0 +1,55 @@
+import torch
+from colossalai.utils.model.lazy_init_context import LazyInitContext
+from torchvision.models import resnet34
+import random
+import numpy as np
+
+MANUAL_SEED = 0
+random.seed(MANUAL_SEED)
+np.random.seed(MANUAL_SEED)
+torch.manual_seed(MANUAL_SEED)
+
+
+class MLP(torch.nn.Module):
+
+ def __init__(self, dim: int = 4):
+ super().__init__()
+ intermediate_dim = dim * 4
+ self.dense_1 = torch.nn.Linear(dim, intermediate_dim)
+ self.activation = torch.nn.GELU()
+ self.dense_2 = torch.nn.Linear(intermediate_dim, dim)
+ self.dropout = torch.nn.Dropout(0.1)
+
+ def forward(self, x):
+ x = self.dense_1(x)
+ x = self.activation(x)
+ x = self.dense_2(x)
+ x = self.dropout(x)
+ return x
+
+
+def test_lazy_init():
+ cpu_rng_state = torch.get_rng_state()
+ origin_model = MLP()
+ origin_param_dict = dict(origin_model.named_parameters())
+ torch.set_rng_state(cpu_rng_state)
+ ctx = LazyInitContext()
+ with ctx:
+ model = MLP()
+ for param in model.parameters():
+ assert param.is_meta
+ for buffer in model.buffers():
+ assert buffer.is_meta
+ for module in model.children():
+ ctx.lazy_init_parameters(module)
+ for param in module.parameters():
+ assert not param.is_meta
+ for buffer in module.buffers():
+ assert not buffer.is_meta
+ param_dict = dict(model.named_parameters())
+ for key in origin_param_dict.keys():
+ assert origin_param_dict[key].data.equal(param_dict[key].data)
+
+
+if __name__ == '__main__':
+ test_lazy_init()
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/1193 | 2022-06-30T07:17:48Z | 2022-07-04T02:12:02Z | 2022-07-04T02:12:02Z | 2022-07-04T02:12:02Z | 2,021 | hpcaitech/ColossalAI | 11,862 |
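The intercept-and-replay pattern this diff builds on can be shown in a torch-free miniature. `Layer` and `InterceptCtx` below are hypothetical stand-ins, not ColossalAI APIs: while the context is active, each constructor call is cached (with kwargs deep-copied, as the PR does, so a later override such as `kwargs['device'] = 'meta'` cannot leak into the cache), and any cached module can be rebuilt on demand.

```python
import copy

class Layer:
    """Toy stand-in for a torch.nn module."""
    def __init__(self, in_dim, out_dim, bias=True):
        self.in_dim, self.out_dim, self.bias = in_dim, out_dim, bias

class InterceptCtx:
    """Miniature of the intercept-and-replay idea in LazyInitContext:
    wrap Layer.__init__ to record every call, then replay a recorded
    call to materialize an equivalent module later."""
    def __init__(self):
        self.cache = {}

    def __enter__(self):
        self._orig = Layer.__init__
        ctx = self
        def wrapped(module, *args, **kwargs):
            ctx.cache[module] = (args, copy.deepcopy(kwargs))
            ctx._orig(module, *args, **kwargs)
        Layer.__init__ = wrapped
        return self

    def __exit__(self, *exc):
        Layer.__init__ = self._orig  # restore the patched constructor

    def rebuild(self, module):
        args, kwargs = self.cache[module]
        return type(module)(*args, **kwargs)

with InterceptCtx() as ctx:
    layer = Layer(4, 8, bias=False)

clone = ctx.rebuild(layer)
print(clone.in_dim, clone.out_dim, clone.bias)  # 4 8 False
```

The real context additionally patches every `torch.nn` module, constructs on the `meta` device, and — as this PR adds — keys a `module_rebuild_dict` by module and recurses into tensor, module, and list arguments so any intercepted submodule can be materialized on its own.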
|
[MRG] Use _check_sample_weight in BaseForest | diff --git a/sklearn/ensemble/_forest.py b/sklearn/ensemble/_forest.py
index 39af503c43279..0489805e19fc2 100644
--- a/sklearn/ensemble/_forest.py
+++ b/sklearn/ensemble/_forest.py
@@ -61,7 +61,7 @@ class calls the ``fit`` method of each sub-estimator on random samples
from ._base import BaseEnsemble, _partition_estimators
from ..utils.fixes import _joblib_parallel_args
from ..utils.multiclass import check_classification_targets
-from ..utils.validation import check_is_fitted
+from ..utils.validation import check_is_fitted, _check_sample_weight
__all__ = ["RandomForestClassifier",
@@ -249,8 +249,7 @@ def decision_path(self, X):
X = self._validate_X_predict(X)
indicators = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,
**_joblib_parallel_args(prefer='threads'))(
- delayed(tree.decision_path)(X,
- check_input=False)
+ delayed(tree.decision_path)(X, check_input=False)
for tree in self.estimators_)
n_nodes = [0]
@@ -288,7 +287,7 @@ def fit(self, X, y, sample_weight=None):
X = check_array(X, accept_sparse="csc", dtype=DTYPE)
y = check_array(y, accept_sparse='csc', ensure_2d=False, dtype=None)
if sample_weight is not None:
- sample_weight = check_array(sample_weight, ensure_2d=False)
+ sample_weight = _check_sample_weight(sample_weight, X)
if issparse(X):
# Pre-sort indices to avoid that each individual tree of the
# ensemble sorts the indices.
@@ -538,7 +537,8 @@ def _validate_y_class_weight(self, y):
y_store_unique_indices = np.zeros(y.shape, dtype=np.int)
for k in range(self.n_outputs_):
- classes_k, y_store_unique_indices[:, k] = np.unique(y[:, k], return_inverse=True)
+ classes_k, y_store_unique_indices[:, k] = \
+ np.unique(y[:, k], return_inverse=True)
self.classes_.append(classes_k)
self.n_classes_.append(classes_k.shape[0])
y = y_store_unique_indices
@@ -548,16 +548,18 @@ def _validate_y_class_weight(self, y):
if isinstance(self.class_weight, str):
if self.class_weight not in valid_presets:
raise ValueError('Valid presets for class_weight include '
- '"balanced" and "balanced_subsample". Given "%s".'
+ '"balanced" and "balanced_subsample".'
+ 'Given "%s".'
% self.class_weight)
if self.warm_start:
- warn('class_weight presets "balanced" or "balanced_subsample" are '
+ warn('class_weight presets "balanced" or '
+ '"balanced_subsample" are '
'not recommended for warm_start if the fitted data '
'differs from the full dataset. In order to use '
- '"balanced" weights, use compute_class_weight("balanced", '
- 'classes, y). In place of y you can use a large '
- 'enough sample of the full training set target to '
- 'properly estimate the class frequency '
+ '"balanced" weights, use compute_class_weight '
+ '("balanced", classes, y). In place of y you can use '
+ 'a large enough sample of the full training set '
+ 'target to properly estimate the class frequency '
'distributions. Pass the resulting weights as the '
'class_weight parameter.')
@@ -615,9 +617,9 @@ def predict_proba(self, X):
"""Predict class probabilities for X.
The predicted class probabilities of an input sample are computed as
- the mean predicted class probabilities of the trees in the forest. The
- class probability of a single tree is the fraction of samples of the same
- class in a leaf.
+ the mean predicted class probabilities of the trees in the forest.
+ The class probability of a single tree is the fraction of samples of
+ the same class in a leaf.
Parameters
----------
@@ -1559,8 +1561,9 @@ class ExtraTreesClassifier(ForestClassifier):
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
- The "balanced_subsample" mode is the same as "balanced" except that weights are
- computed based on the bootstrap sample for every tree grown.
+ The "balanced_subsample" mode is the same as "balanced" except that
+ weights are computed based on the bootstrap sample for every tree
+ grown.
For multi-output, the weights of each column of y will be multiplied.
| <!--
Thanks for contributing a pull request! Please ensure you have taken a look at
the contribution guidelines: https://github.com/scikit-learn/scikit-learn/blob/master/CONTRIBUTING.md#pull-request-checklist
-->
#### Reference Issues/PRs
<!--
Example: Fixes #1234. See also #3456.
Please use keywords (e.g., Fixes) to create link to the issues or pull requests
you resolved, so that they will automatically be closed when your pull request
is merged. See https://github.com/blog/1506-closing-issues-via-pull-requests
-->
#15358
#### What does this implement/fix? Explain your changes.
Use _check_sample_weight to validate sample_weight in BaseForest.
Fixes trailing whitespace issues marked by flake8.
Worked on during the WIMLDS Bay Area Sprint with @lakrish
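
The change described above centralizes `sample_weight` validation behind scikit-learn's `_check_sample_weight` helper. As a rough illustration of the kind of checks that helper replaces ad-hoc code with, here is a minimal re-implementation sketch — this is *not* scikit-learn's actual code, and the function name `check_sample_weight` is chosen just for this example:

```python
import numpy as np


def check_sample_weight(sample_weight, X, dtype=np.float64):
    """Illustrative sketch of sample_weight validation (simplified;
    the real helper is sklearn.utils.validation._check_sample_weight)."""
    n_samples = X.shape[0]
    if sample_weight is None:
        # No weights given: every sample counts equally.
        return np.ones(n_samples, dtype=dtype)
    if np.isscalar(sample_weight):
        # A single number is broadcast to all samples.
        return np.full(n_samples, sample_weight, dtype=dtype)
    sample_weight = np.asarray(sample_weight, dtype=dtype)
    if sample_weight.ndim != 1 or sample_weight.shape[0] != n_samples:
        raise ValueError(
            f"sample_weight.shape == {sample_weight.shape}, "
            f"expected ({n_samples},)"
        )
    return sample_weight


X = np.zeros((4, 2))
print(check_sample_weight(None, X))   # [1. 1. 1. 1.]
print(check_sample_weight(2.0, X))    # [2. 2. 2. 2.]
```

Funneling every estimator through one validator like this is what makes the error messages and accepted shapes consistent across the codebase.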
<!--
Please be aware that we are a loose team of volunteers so patience is
necessary; assistance handling other issues is very welcome. We value
all user contributions, no matter how minor they are. If we are slow to
review, either the pull request needs some benchmarking, tinkering,
convincing, etc. or more likely the reviewers are simply busy. In either
case, we ask for your understanding during the review process.
For more information, see our FAQ on this topic:
http://scikit-learn.org/dev/faq.html#why-is-my-pull-request-not-getting-any-attention.
Thanks for contributing!
-->
| https://api.github.com/repos/scikit-learn/scikit-learn/pulls/15492 | 2019-11-02T20:05:41Z | 2019-11-12T07:48:56Z | 2019-11-12T07:48:56Z | 2019-11-12T07:48:56Z | 1,068 | scikit-learn/scikit-learn | 46,327 |
Allow user to start work from DMs. | diff --git a/discord-bot/.env.example b/discord-bot/.env.example
index 5cd18facb9..ec114c8fee 100644
--- a/discord-bot/.env.example
+++ b/discord-bot/.env.example
@@ -1,7 +1,7 @@
BOT_TOKEN=<discord bot token>
DECLARE_GLOBAL_COMMANDS=<testing guild id>
OWNER_IDS=[<your user id>, <other user ids>]
-PREFIX="./"
+PREFIX="/" # Don't change, this allows for slash commands in DMs
OASST_API_URL="http://localhost:8080" # No trailing '/'
OASST_API_KEY=""
diff --git a/discord-bot/bot/extensions/work.py b/discord-bot/bot/extensions/work.py
index 19802c64ac..c905e7a08c 100644
--- a/discord-bot/bot/extensions/work.py
+++ b/discord-bot/bot/extensions/work.py
@@ -8,7 +8,6 @@
import lightbulb.decorators
import miru
from aiosqlite import Connection
-from bot.db.schemas import GuildSettings
from bot.utils import EMPTY
from loguru import logger
from oasst_shared.api_client import OasstApiClient, TaskType
@@ -31,8 +30,8 @@
type=str,
)
@lightbulb.command("work", "Complete a task.")
-@lightbulb.implements(lightbulb.SlashCommand)
-async def work(ctx: lightbulb.SlashContext):
+@lightbulb.implements(lightbulb.SlashCommand, lightbulb.PrefixCommand)
+async def work(ctx: lightbulb.Context):
"""Create and handle a task."""
# make sure the user isn't currently doing a task
currently_working: set[hikari.Snowflakeish] = ctx.bot.d.currently_working
@@ -55,7 +54,7 @@ async def work(ctx: lightbulb.SlashContext):
currently_working.remove(ctx.author.id)
-async def _handle_task(ctx: lightbulb.SlashContext, task_type: TaskRequestType) -> None:
+async def _handle_task(ctx: lightbulb.Context, task_type: TaskRequestType) -> None:
"""Handle creating and collecting user input for a task.
Continually present tasks to the user until they select one, cancel, or time out.
@@ -117,16 +116,16 @@ async def _handle_task(ctx: lightbulb.SlashContext, task_type: TaskRequestType)
else:
logger.critical(f"Unexpected task type received: {new_task.type}")
- # Send a message in the log channel that the task is complete
- # TODO: Maybe do something with the msg ID so users can rate the "answer"
- assert ctx.guild_id is not None
+ # Send a message in all the log channels that the task is complete
conn: Connection = ctx.bot.d.db
- guild_settings = await GuildSettings.from_db(conn, ctx.guild_id)
+ async with conn.cursor() as cursor:
+ await cursor.execute("SELECT log_channel_id FROM guild_settings")
+ log_channel_ids = await cursor.fetchall()
- if guild_settings is not None and guild_settings.log_channel_id is not None:
-
- channel = await ctx.bot.rest.fetch_channel(guild_settings.log_channel_id)
- assert isinstance(channel, hikari.TextableChannel) # option converter
+ channels = [
+ ctx.bot.cache.get_guild_channel(id[0]) or await ctx.bot.rest.fetch_channel(id[0])
+ for id in log_channel_ids
+ ]
done_embed = (
hikari.Embed(
@@ -140,7 +139,10 @@ async def _handle_task(ctx: lightbulb.SlashContext, task_type: TaskRequestType)
.add_field("Global Ranking", "0/0", inline=True)
.set_footer(f"Task ID: {task.id}")
)
- await channel.send(EMPTY, embed=done_embed)
+ # This will definitely get the bot rate limited, but that's a future problem
+ asyncio.gather(
+ *(ch.send(EMPTY, embed=done_embed) for ch in channels if isinstance(ch, hikari.TextableChannel))
+ )
# ask the user if they want to do another task
choice_view = ChoiceView(timeout=MAX_TASK_ACCEPT_TIME)
@@ -157,7 +159,7 @@ async def _handle_task(ctx: lightbulb.SlashContext, task_type: TaskRequestType)
async def _select_task(
- ctx: lightbulb.SlashContext, task_type: TaskRequestType, user: protocol_schema.User | None = None
+ ctx: lightbulb.Context, task_type: TaskRequestType, user: protocol_schema.User | None = None
) -> tuple[protocol_schema.Task | None, str]:
"""Present tasks to the user until they accept one, cancel, or time out."""
oasst_api: OasstApiClient = ctx.bot.d.oasst_api
@@ -196,7 +198,7 @@ async def _select_task(
async def _send_task(
- ctx: lightbulb.SlashContext, task: protocol_schema.Task
+ ctx: lightbulb.Context, task: protocol_schema.Task
) -> tuple[t.Literal["accept", "next", "cancel"] | None, str]:
"""Send a task to the user.
diff --git a/discord-bot/bot/settings.py b/discord-bot/bot/settings.py
index 24c837a34b..a2e2c2baad 100644
--- a/discord-bot/bot/settings.py
+++ b/discord-bot/bot/settings.py
@@ -8,7 +8,7 @@ class Settings(BaseSettings):
bot_token: str = Field(env="BOT_TOKEN", default="")
declare_global_commands: int = Field(env="DECLARE_GLOBAL_COMMANDS", default=0)
owner_ids: list[int] = Field(env="OWNER_IDS", default_factory=list)
- prefix: str = Field(env="PREFIX", default="./")
+ prefix: str = Field(env="PREFIX", default="/")
oasst_api_url: str = Field(env="OASST_API_URL", default="http://localhost:8080")
oasst_api_key: str = Field(env="OASST_API_KEY", default="")
| This also broadcasts task completion messages to every server. This needs to be changed in the future if the bot starts sending too many messages, but it should be fine right now. | https://api.github.com/repos/LAION-AI/Open-Assistant/pulls/267 | 2023-01-02T08:46:13Z | 2023-01-03T10:31:33Z | 2023-01-03T10:31:33Z | 2023-01-04T01:22:17Z | 1,374 | LAION-AI/Open-Assistant | 37,847 |
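
The fan-out in the diff above sends the "task complete" embed to every log channel via `asyncio.gather`. A self-contained sketch of that pattern, using a hypothetical `StubChannel` in place of a real `hikari.TextableChannel`:

```python
import asyncio


class StubChannel:
    """Hypothetical stand-in for a hikari.TextableChannel."""

    def __init__(self, name):
        self.name = name
        self.sent = []

    async def send(self, content, embed=None):
        self.sent.append((content, embed))


async def broadcast(channels, embed):
    # Fire every send concurrently and wait for all of them to finish,
    # mirroring the asyncio.gather(...) fan-out added in the diff.
    await asyncio.gather(*(ch.send("", embed=embed) for ch in channels))


channels = [StubChannel(f"log-{i}") for i in range(3)]
asyncio.run(broadcast(channels, embed={"title": "Task Complete"}))
print([len(ch.sent) for ch in channels])  # [1, 1, 1]
```

One detail worth noting: the diff calls `asyncio.gather(...)` without `await`, which schedules the sends but neither waits for them nor surfaces their exceptions; awaiting the gather, as in this sketch, would guarantee delivery (or an error) before continuing.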
Improve single-binary method wording | diff --git a/docs/README.md b/docs/README.md
index 1fc8f444f8..978168c7d3 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -217,7 +217,7 @@ $ pacman -Syu
#### Single binary executables
-Have a standalone HTTPie executable when you don't want to go through the full installation process
+Get the standalone HTTPie Linux executables when you don't want to go through the full installation process
```bash
# Install httpie
diff --git a/docs/installation/methods.yml b/docs/installation/methods.yml
index ae7f69740b..f126d8d8bf 100644
--- a/docs/installation/methods.yml
+++ b/docs/installation/methods.yml
@@ -186,7 +186,7 @@ tools:
single-binary:
title: Single binary executables
name: Single binary executables
- note: Have a standalone HTTPie executable when you don't want to go through the full installation process.
+ note: Get the standalone HTTPie Linux executables when you don't want to go through the full installation process.
links:
commands:
install:
| Call to action wording. | https://api.github.com/repos/httpie/cli/pulls/1399 | 2022-05-10T15:28:23Z | 2022-05-10T16:55:32Z | 2022-05-10T16:55:31Z | 2022-05-10T17:33:31Z | 265 | httpie/cli | 34,059 |
Adding config files for convnext | diff --git a/src/transformers/models/convnext/configuration_convnext.py b/src/transformers/models/convnext/configuration_convnext.py
index 8e435b1ed1d9a..d214cfcb1bd67 100644
--- a/src/transformers/models/convnext/configuration_convnext.py
+++ b/src/transformers/models/convnext/configuration_convnext.py
@@ -67,12 +67,14 @@ class ConvNextConfig(PretrainedConfig):
Example:
```python
- >>> from transformers import ConvNextModel, ConvNextConfig
+ >>> from transformers import ConvNextConfig, ConvNextModel
>>> # Initializing a ConvNext convnext-tiny-224 style configuration
>>> configuration = ConvNextConfig()
- >>> # Initializing a model from the convnext-tiny-224 style configuration
+
+ >>> # Initializing a model (with random weights) from the convnext-tiny-224 style configuration
>>> model = ConvNextModel(configuration)
+
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
diff --git a/utils/documentation_tests.txt b/utils/documentation_tests.txt
index e1a33d70cafa3..78b595df0e851 100644
--- a/utils/documentation_tests.txt
+++ b/utils/documentation_tests.txt
@@ -37,6 +37,7 @@ src/transformers/models/codegen/configuration_codegen.py
src/transformers/models/conditional_detr/configuration_conditional_detr.py
src/transformers/models/conditional_detr/modeling_conditional_detr.py
src/transformers/models/convbert/configuration_convbert.py
+src/transformers/models/convnext/configuration_convnext.py
src/transformers/models/convnext/modeling_convnext.py
src/transformers/models/ctrl/configuration_ctrl.py
src/transformers/models/ctrl/modeling_ctrl.py
@@ -79,7 +80,7 @@ src/transformers/models/longt5/modeling_longt5.py
src/transformers/models/marian/modeling_marian.py
src/transformers/models/markuplm/modeling_markuplm.py
src/transformers/models/mbart/modeling_mbart.py
-src/transformers/models/megatron_bert/configuration_megatron_bert.py
+src/transformers/models/megatron_bert/configuration_megatron_bert.py
src/transformers/models/mobilebert/configuration_mobilebert.py
src/transformers/models/mobilebert/modeling_mobilebert.py
src/transformers/models/mobilebert/modeling_tf_mobilebert.py
| # What does this PR do?
Adding config files for convnext.
Based on issue: https://github.com/huggingface/transformers/issues/19487
@ydshieh could you please check it?
Thanks :) | https://api.github.com/repos/huggingface/transformers/pulls/19717 | 2022-10-18T12:35:02Z | 2022-10-18T15:10:10Z | 2022-10-18T15:10:10Z | 2022-10-18T15:12:43Z | 546 | huggingface/transformers | 12,811 |
[dashboard] Add `RAY_CLUSTER_ACTIVITY_HOOK` to `/api/component_activities` | diff --git a/dashboard/consts.py b/dashboard/consts.py
index a81d3ebc73c23..0380dbbc079e8 100644
--- a/dashboard/consts.py
+++ b/dashboard/consts.py
@@ -36,3 +36,10 @@
BAD_RUNTIME_ENV_CACHE_TTL_SECONDS = env_integer(
"BAD_RUNTIME_ENV_CACHE_TTL_SECONDS", 60 * 10
)
+# Hook that is invoked on the dashboard `/api/component_activities` endpoint.
+# Environment variable stored here should be a callable that does not
+# take any arguments and should return a dictionary mapping
+# activity component type (str) to
+# ray.dashboard.modules.snapshot.snapshot_head.RayActivityResponse.
+# Example: "your.module.ray_cluster_activity_hook".
+RAY_CLUSTER_ACTIVITY_HOOK = "RAY_CLUSTER_ACTIVITY_HOOK"
diff --git a/dashboard/modules/snapshot/component_activities_schema.json b/dashboard/modules/snapshot/component_activities_schema.json
index 8eb93c8816bc8..57fda426987ad 100644
--- a/dashboard/modules/snapshot/component_activities_schema.json
+++ b/dashboard/modules/snapshot/component_activities_schema.json
@@ -7,7 +7,8 @@
"type": "object",
"properties":{
"is_active": {
- "type": "boolean"
+ "type": "string",
+ "enum": ["ACTIVE", "INACTIVE", "ERROR"]
},
"reason": {
"type": [
diff --git a/dashboard/modules/snapshot/snapshot_head.py b/dashboard/modules/snapshot/snapshot_head.py
index 5b24c2a0fee51..7f6de4ed57cf2 100644
--- a/dashboard/modules/snapshot/snapshot_head.py
+++ b/dashboard/modules/snapshot/snapshot_head.py
@@ -2,16 +2,21 @@
import concurrent.futures
import dataclasses
from datetime import datetime
+import enum
+import logging
import hashlib
import json
+import os
from typing import Any, Dict, List, Optional
import aiohttp.web
import ray
+from ray.dashboard.consts import RAY_CLUSTER_ACTIVITY_HOOK
import ray.dashboard.optional_utils as dashboard_optional_utils
import ray.dashboard.utils as dashboard_utils
from ray._private import ray_constants
+from ray._private.storage import _load_class
from ray.core.generated import gcs_pb2, gcs_service_pb2, gcs_service_pb2_grpc
from ray.dashboard.modules.job.common import JOB_ID_METADATA_KEY, JobInfoStorageClient
from ray.experimental.internal_kv import (
@@ -22,9 +27,18 @@
from ray.job_submission import JobInfo
from ray.runtime_env import RuntimeEnv
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.INFO)
+
routes = dashboard_optional_utils.ClassMethodRouteTable
+class RayActivityStatus(str, enum.Enum):
+ ACTIVE = "ACTIVE"
+ INACTIVE = "INACTIVE"
+ ERROR = "ERROR"
+
+
@dataclasses.dataclass
class RayActivityResponse:
"""
@@ -32,11 +46,12 @@ class RayActivityResponse:
active, and metadata about observation.
"""
- # Whether the corresponding Ray component is considered active
- is_active: bool
- # Reason if Ray component is considered active
+ # Whether the corresponding Ray component is considered active or inactive,
+ # or if there was an error while collecting this observation.
+ is_active: RayActivityStatus
+ # Reason if Ray component is considered active or errored.
reason: Optional[str] = None
- # Timestamp of when this observation about the Ray component was made
+ # Timestamp of when this observation about the Ray component was made.
timestamp: Optional[float] = None
@@ -108,16 +123,64 @@ async def snapshot(self, req):
@routes.get("/api/component_activities")
async def get_component_activities(self, req) -> aiohttp.web.Response:
- # Get activity information for driver
timeout = req.query.get("timeout", None)
if timeout and timeout.isdigit():
timeout = int(timeout)
else:
timeout = 5
+ # Get activity information for driver
driver_activity_info = await self._get_job_activity_info(timeout=timeout)
-
resp = {"driver": dataclasses.asdict(driver_activity_info)}
+
+ if RAY_CLUSTER_ACTIVITY_HOOK in os.environ:
+ try:
+ cluster_activity_callable = _load_class(
+ os.environ[RAY_CLUSTER_ACTIVITY_HOOK]
+ )
+ external_activity_output = cluster_activity_callable()
+ assert isinstance(external_activity_output, dict), (
+ f"Output of hook {os.environ[RAY_CLUSTER_ACTIVITY_HOOK]} "
+ "should be Dict[str, RayActivityResponse]. Got "
+ f"output: {external_activity_output}"
+ )
+ for component_type in external_activity_output:
+ try:
+ component_activity_output = external_activity_output[
+ component_type
+ ]
+ # Cast output to type RayActivityResponse
+ component_activity_output = RayActivityResponse(
+ **dataclasses.asdict(component_activity_output)
+ )
+ # Validate is_active field is of type RayActivityStatus
+ component_activity_output.is_active = RayActivityStatus[
+ component_activity_output.is_active
+ ]
+ resp[component_type] = dataclasses.asdict(
+ component_activity_output
+ )
+ except Exception as e:
+ logger.exception(
+ f"Failed to get activity status of {component_type} "
+ f"from user hook {os.environ[RAY_CLUSTER_ACTIVITY_HOOK]}."
+ )
+ resp[component_type] = {
+ "is_active": RayActivityStatus.ERROR,
+ "reason": repr(e),
+ "timestamp": datetime.now().timestamp(),
+ }
+ except Exception as e:
+ logger.exception(
+ "Failed to get activity status from user "
+ f"hook {os.environ[RAY_CLUSTER_ACTIVITY_HOOK]}."
+ )
+ resp["external_component"] = {
+ "is_active": RayActivityStatus.ERROR,
+ "reason": repr(e),
+ "timestamp": datetime.now().timestamp(),
+ }
+
return aiohttp.web.Response(
text=json.dumps(resp),
content_type="application/json",
@@ -128,25 +191,40 @@ async def _get_job_activity_info(self, timeout: int) -> RayActivityResponse:
# Returns if there is Ray activity from drivers (job).
# Drivers in namespaces that start with _ray_internal_job_info_ are not
# considered activity.
- request = gcs_service_pb2.GetAllJobInfoRequest()
- reply = await self._gcs_job_info_stub.GetAllJobInfo(request, timeout=timeout)
+ try:
+ request = gcs_service_pb2.GetAllJobInfoRequest()
+ reply = await self._gcs_job_info_stub.GetAllJobInfo(
+ request, timeout=timeout
+ )
- num_active_drivers = 0
- for job_table_entry in reply.job_info_list:
- is_dead = bool(job_table_entry.is_dead)
- in_internal_namespace = job_table_entry.config.ray_namespace.startswith(
- JobInfoStorageClient.JOB_DATA_KEY_PREFIX
+ num_active_drivers = 0
+ for job_table_entry in reply.job_info_list:
+ is_dead = bool(job_table_entry.is_dead)
+ in_internal_namespace = job_table_entry.config.ray_namespace.startswith(
+ JobInfoStorageClient.JOB_DATA_KEY_PREFIX
+ )
+ if not is_dead and not in_internal_namespace:
+ num_active_drivers += 1
+
+ is_active = (
+ RayActivityStatus.ACTIVE
+ if num_active_drivers > 0
+ else RayActivityStatus.INACTIVE
+ )
+ return RayActivityResponse(
+ is_active=is_active,
+ reason=f"Number of active drivers: {num_active_drivers}"
+ if num_active_drivers
+ else None,
+ timestamp=datetime.now().timestamp(),
+ )
+ except Exception as e:
+ logger.exception("Failed to get activity status of Ray drivers.")
+ return RayActivityResponse(
+ is_active=RayActivityStatus.ERROR,
+ reason=repr(e),
+ timestamp=datetime.now().timestamp(),
)
- if not is_dead and not in_internal_namespace:
- num_active_drivers += 1
-
- return RayActivityResponse(
- is_active=num_active_drivers > 0,
- reason=f"Number of active drivers: {num_active_drivers}"
- if num_active_drivers
- else None,
- timestamp=datetime.now().timestamp(),
- )
def _get_job_info(self, metadata: Dict[str, str]) -> Optional[JobInfo]:
# If a job submission ID has been added to a job, the status is
diff --git a/dashboard/modules/snapshot/tests/test_snapshot.py b/dashboard/modules/snapshot/tests/test_snapshot.py
index 0789f5c601c39..d024a929c9a1e 100644
--- a/dashboard/modules/snapshot/tests/test_snapshot.py
+++ b/dashboard/modules/snapshot/tests/test_snapshot.py
@@ -18,12 +18,51 @@
run_string_as_driver_nonblocking,
)
from ray.dashboard import dashboard
+from ray.dashboard.consts import RAY_CLUSTER_ACTIVITY_HOOK
from ray.dashboard.modules.snapshot.snapshot_head import RayActivityResponse
from ray.dashboard.tests.conftest import * # noqa
-def test_inactive_component_activities(call_ray_start):
- # Verify no activity in response if no active drivers
+@pytest.fixture
+def set_ray_cluster_activity_hook(request):
+ """
+ Fixture that sets RAY_CLUSTER_ACTIVITY_HOOK environment variable
+ for test_e2e_component_activities_hook.
+ """
+ external_hook = getattr(request, "param")
+ assert (
+ external_hook
+ ), "Please pass value of RAY_CLUSTER_ACTIVITY_HOOK env var to this fixture"
+ old_hook = os.environ.get(RAY_CLUSTER_ACTIVITY_HOOK)
+ os.environ[RAY_CLUSTER_ACTIVITY_HOOK] = external_hook
+
+ yield external_hook
+
+ if old_hook is not None:
+ os.environ[RAY_CLUSTER_ACTIVITY_HOOK] = old_hook
+ else:
+ del os.environ[RAY_CLUSTER_ACTIVITY_HOOK]
+
+
+@pytest.mark.parametrize(
+ "set_ray_cluster_activity_hook",
+ [
+ "ray._private.test_utils.external_ray_cluster_activity_hook1",
+ "ray._private.test_utils.external_ray_cluster_activity_hook2",
+ "ray._private.test_utils.external_ray_cluster_activity_hook3",
+ "ray._private.test_utils.external_ray_cluster_activity_hook4",
+ ],
+ indirect=True,
+)
+async def test_component_activities_hook(set_ray_cluster_activity_hook, call_ray_start):
+ """
+ Tests /api/component_activities returns correctly for various
+ responses of RAY_CLUSTER_ACTIVITY_HOOK defined in ray._private.test_utils.
+
+ Verify no active drivers are correctly reflected in response.
+ """
+ external_hook = set_ray_cluster_activity_hook
+
response = requests.get("http://127.0.0.1:8265/api/component_activities")
response.raise_for_status()
@@ -36,15 +75,46 @@ def test_inactive_component_activities(call_ray_start):
pprint.pprint(data)
jsonschema.validate(instance=data, schema=json.load(open(schema_path)))
- # Validate ray_activity_response field can be cast to RayActivityResponse object
+ # Validate driver response can be cast to RayActivityResponse object
+ # and that there are no active drivers.
driver_ray_activity_response = RayActivityResponse(**data["driver"])
- assert not driver_ray_activity_response.is_active
+ assert driver_ray_activity_response.is_active == "INACTIVE"
assert driver_ray_activity_response.reason is None
+ # Validate external component response can be cast to RayActivityResponse object
+ if external_hook[-1] == "4":
+ external_activity_response = RayActivityResponse(**data["external_component"])
+ assert external_activity_response.is_active == "ERROR"
+ assert (
+ external_activity_response.reason
+ == "Exception('Error in external cluster activity hook')"
+ )
+ elif external_hook[-1] == "3":
+ external_activity_response = RayActivityResponse(**data["external_component"])
+ assert external_activity_response.is_active == "ERROR"
+ elif external_hook[-1] == "2":
+ external_activity_response = RayActivityResponse(**data["test_component2"])
+ assert external_activity_response.is_active == "ERROR"
+ elif external_hook[-1] == "1":
+ external_activity_response = RayActivityResponse(**data["test_component1"])
+ assert external_activity_response.is_active == "ACTIVE"
+ assert external_activity_response.reason == "Counter: 1"
+
+ # Call endpoint again to validate different response
+ response = requests.get("http://127.0.0.1:8265/api/component_activities")
+ response.raise_for_status()
+ data = response.json()
+ jsonschema.validate(instance=data, schema=json.load(open(schema_path)))
+
+ external_activity_response = RayActivityResponse(**data["test_component1"])
+ assert external_activity_response.is_active == "ACTIVE"
+ assert external_activity_response.reason == "Counter: 2"
+
def test_active_component_activities(ray_start_with_dashboard):
# Verify drivers which don't have namespace starting with _ray_internal_job_info_
# are considered active.
+
driver_template = """
import ray
@@ -77,7 +147,7 @@ def test_active_component_activities(ray_start_with_dashboard):
# Validate ray_activity_response field can be cast to RayActivityResponse object
driver_ray_activity_response = RayActivityResponse(**data["driver"])
- assert driver_ray_activity_response.is_active
+ assert driver_ray_activity_response.is_active == "ACTIVE"
# Drivers with namespace starting with "_ray_internal_job_info_" are not
# considered active drivers. Three active drivers are the two
# run with namespace "my_namespace" and the one started
diff --git a/python/ray/_private/test_utils.py b/python/ray/_private/test_utils.py
index 52b00672679b0..a780165eaf9f3 100644
--- a/python/ray/_private/test_utils.py
+++ b/python/ray/_private/test_utils.py
@@ -1,4 +1,5 @@
import asyncio
+import dataclasses
import fnmatch
import functools
import io
@@ -1366,3 +1367,67 @@ def job_hook(**kwargs):
cmd = " ".join(kwargs["entrypoint"])
print(f"hook intercepted: {cmd}")
sys.exit(0)
+
+
+@dataclasses.dataclass
+class TestRayActivityResponse:
+ """
+ Redefinition of dashboard.modules.snapshot.snapshot_head.RayActivityResponse
+ used in test_component_activities_hook to mimic typical
+ usage of redefining or extending response type.
+ """
+
+ is_active: str
+ reason: Optional[str] = None
+ timestamp: Optional[float] = None
+
+
+# Global counter to test different return values
+# for external_ray_cluster_activity_hook1.
+ray_cluster_activity_hook_counter = 0
+
+
+def external_ray_cluster_activity_hook1():
+ """
+ Example external hook for test_component_activities_hook.
+
+ Returns valid response and increments counter in `reason`
+ field on each call.
+ """
+ global ray_cluster_activity_hook_counter
+ ray_cluster_activity_hook_counter += 1
+ return {
+ "test_component1": TestRayActivityResponse(
+ is_active="ACTIVE",
+ reason=f"Counter: {ray_cluster_activity_hook_counter}",
+ )
+ }
+
+
+def external_ray_cluster_activity_hook2():
+ """
+ Example external hook for test_component_activities_hook.
+
+ Returns invalid output because the value of `test_component2`
+ should be of type RayActivityResponse.
+ """
+ return {"test_component2": "bad_output"}
+
+
+def external_ray_cluster_activity_hook3():
+ """
+ Example external hook for test_component_activities_hook.
+
+ Returns invalid output because return type is not
+ Dict[str, RayActivityResponse]
+ """
+ return "bad_output"
+
+
+def external_ray_cluster_activity_hook4():
+ """
+ Example external hook for test_component_activities_hook.
+
+ Errors during execution.
+ """
+ raise Exception("Error in external cluster activity hook")
| <!-- Thank you for your contribution! Please review https://github.com/ray-project/ray/blob/master/CONTRIBUTING.rst before opening a pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this solves. -->
- Add external hook to `/api/component_activities` endpoint in dashboard snapshot router
- Change `is_active` field of `RayActivityResponse` to take an enum `RayActivityStatus` instead of `bool`. This is a backward incompatible change, but should be ok because https://github.com/ray-project/ray/pull/25996 wasn't included in any branch cuts. `RayActivityResponse` now supports informing when there was an error getting the activity observation and the reason.
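
Per the description above, the hook named by `RAY_CLUSTER_ACTIVITY_HOOK` is a zero-argument callable returning `Dict[str, RayActivityResponse]`. A hedged sketch of what such a user-defined hook could look like — the names `my_cluster_activity_hook`, `my_component`, and the `pending` signal are all hypothetical, and a real hook would import the response type from `ray.dashboard.modules.snapshot.snapshot_head` rather than redefining it:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class RayActivityResponse:
    """Local sketch of the dashboard's response type (see the diff)."""

    is_active: str  # one of "ACTIVE", "INACTIVE", "ERROR"
    reason: Optional[str] = None
    timestamp: Optional[float] = None


def my_cluster_activity_hook():
    # Referenced via the env var, e.g.
    # RAY_CLUSTER_ACTIVITY_HOOK="your.module.my_cluster_activity_hook"
    pending = 3  # hypothetical signal, e.g. depth of an external work queue
    return {
        "my_component": RayActivityResponse(
            is_active="ACTIVE" if pending > 0 else "INACTIVE",
            reason=f"Pending items: {pending}" if pending else None,
            timestamp=datetime.now().timestamp(),
        )
    }


print(my_cluster_activity_hook()["my_component"].is_active)  # ACTIVE
```

Because the endpoint wraps the hook call in try/except and casts each value through `RayActivityResponse`, a hook that raises or returns a malformed value is reported as `is_active="ERROR"` instead of breaking the response.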
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `scripts/format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed for https://docs.ray.io/en/master/.
- [ ] I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
- Testing Strategy
- [x] Unit tests
- [ ] Release tests
- [ ] This PR is not tested :(
| https://api.github.com/repos/ray-project/ray/pulls/26297 | 2022-07-05T11:13:01Z | 2022-07-08T17:51:59Z | 2022-07-08T17:51:59Z | 2022-07-08T17:51:59Z | 3,563 | ray-project/ray | 19,164 |
Styles Grouping/Sorting #1770 | diff --git a/css/style.css b/css/style.css
index 010c8e7f6..3cc1e5e52 100644
--- a/css/style.css
+++ b/css/style.css
@@ -218,3 +218,48 @@
#stylePreviewOverlay.lower-half {
transform: translate(-140px, -140px);
}
+
+/* scrollable box for style selections */
+.contain .tabs {
+ height: 100%;
+}
+
+.contain .tabs .tabitem.style_selections_tab {
+ height: 100%;
+}
+
+.contain .tabs .tabitem.style_selections_tab > div:first-child {
+ height: 100%;
+}
+
+.contain .tabs .tabitem.style_selections_tab .style_selections {
+ min-height: 200px;
+ height: 100%;
+}
+
+.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] {
+ position: absolute; /* remove this to disable scrolling within the checkbox-group */
+ overflow: auto;
+ padding-right: 2px;
+ max-height: 100%;
+}
+
+.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label {
+ /* max-width: calc(35% - 15px) !important; */ /* add this to enable 3 columns layout */
+ flex: calc(50% - 5px) !important;
+}
+
+.contain .tabs .tabitem.style_selections_tab .style_selections .wrap[data-testid="checkbox-group"] label span {
+ /* white-space:nowrap; */ /* add this to disable text wrapping (better choice for 3 columns layout) */
+ overflow: hidden;
+ text-overflow: ellipsis;
+}
+
+/* styles preview tooltip */
+.preview-tooltip {
+ background-color: #fff8;
+ font-family: monospace;
+ text-align: center;
+ border-radius-top: 5px;
+ display: none; /* remove this to enable tooltip in preview image */
+}
\ No newline at end of file
diff --git a/javascript/script.js b/javascript/script.js
index 8f4cac58f..9aa0b5c16 100644
--- a/javascript/script.js
+++ b/javascript/script.js
@@ -150,9 +150,12 @@ function initStylePreviewOverlay() {
let overlayVisible = false;
const samplesPath = document.querySelector("meta[name='samples-path']").getAttribute("content")
const overlay = document.createElement('div');
+ const tooltip = document.createElement('div');
+ tooltip.className = 'preview-tooltip';
+ overlay.appendChild(tooltip);
overlay.id = 'stylePreviewOverlay';
document.body.appendChild(overlay);
- document.addEventListener('mouseover', function(e) {
+ document.addEventListener('mouseover', function (e) {
const label = e.target.closest('.style_selections label');
if (!label) return;
label.removeEventListener("mouseout", onMouseLeave);
@@ -162,9 +165,12 @@ function initStylePreviewOverlay() {
const originalText = label.querySelector("span").getAttribute("data-original-text");
const name = originalText || label.querySelector("span").textContent;
overlay.style.backgroundImage = `url("${samplesPath.replace(
- "fooocus_v2",
- name.toLowerCase().replaceAll(" ", "_")
+ "fooocus_v2",
+ name.toLowerCase().replaceAll(" ", "_")
).replaceAll("\\", "\\\\")}")`;
+
+ tooltip.textContent = name;
+
function onMouseLeave() {
overlayVisible = false;
overlay.style.opacity = "0";
@@ -172,8 +178,8 @@ function initStylePreviewOverlay() {
label.removeEventListener("mouseout", onMouseLeave);
}
});
- document.addEventListener('mousemove', function(e) {
- if(!overlayVisible) return;
+ document.addEventListener('mousemove', function (e) {
+ if (!overlayVisible) return;
overlay.style.left = `${e.clientX}px`;
overlay.style.top = `${e.clientY}px`;
overlay.className = e.clientY > window.innerHeight / 2 ? "lower-half" : "upper-half";
diff --git a/webui.py b/webui.py
index 80b1a3d8b..de5569155 100644
--- a/webui.py
+++ b/webui.py
@@ -300,7 +300,7 @@ def update_history_link():
history_link = gr.HTML()
shared.gradio_root.load(update_history_link, outputs=history_link, queue=False, show_progress=False)
- with gr.Tab(label='Style'):
+ with gr.Tab(label='Style', elem_classes=['style_selections_tab']):
style_sorter.try_load_sorted_styles(
style_names=legal_style_names,
default_selected=modules.config.default_styles)
| As described in https://github.com/lllyasviel/Fooocus/issues/1770#issuecomment-1887719446 | https://api.github.com/repos/lllyasviel/Fooocus/pulls/1883 | 2024-01-11T18:49:43Z | 2024-03-11T15:35:03Z | 2024-03-11T15:35:03Z | 2024-03-11T15:35:03Z | 1,041 | lllyasviel/Fooocus | 7,025 |
Moved probability to active projects | diff --git a/eop/__init__.py b/active_projects/eop/__init__.py
similarity index 100%
rename from eop/__init__.py
rename to active_projects/eop/__init__.py
diff --git a/eop/bayes.py b/active_projects/eop/bayes.py
similarity index 100%
rename from eop/bayes.py
rename to active_projects/eop/bayes.py
diff --git a/eop/bayes_footnote.py b/active_projects/eop/bayes_footnote.py
similarity index 100%
rename from eop/bayes_footnote.py
rename to active_projects/eop/bayes_footnote.py
diff --git a/eop/combinations.py b/active_projects/eop/combinations.py
similarity index 100%
rename from eop/combinations.py
rename to active_projects/eop/combinations.py
diff --git a/eop/independence.py b/active_projects/eop/independence.py
similarity index 100%
rename from eop/independence.py
rename to active_projects/eop/independence.py
| https://api.github.com/repos/3b1b/manim/pulls/155 | 2018-03-08T22:17:47Z | 2018-03-08T22:17:54Z | 2018-03-08T22:17:54Z | 2018-03-08T22:17:57Z | 236 | 3b1b/manim | 18,469 |
|
Release v0.2.15 & Fix load-8bit & Improve documentations | diff --git a/README.md b/README.md
index 8c3ce53503..cee16abb8d 100644
--- a/README.md
+++ b/README.md
@@ -205,10 +205,6 @@ CUDA_VISIBLE_DEVICES=1 python3 -m fastchat.serve.model_worker --model-path lmsys
```bash
python3 -m fastchat.serve.gradio_web_server_multi
```
-- You can protect your webserver with Gradio's Authentication with a password file. The password file should contain one or more "user:password" pairs in this format: `u1:p1,u2:p2,u3:p3`
-```bash
-python3 -m fastchat.serve.gradio_web_server --gradio-auth-path login.txt
-```
## API
### OpenAI-Compatible RESTful APIs & SDK
diff --git a/docs/langchain_integration.md b/docs/langchain_integration.md
index db9e7a124d..a59d739ab1 100644
--- a/docs/langchain_integration.md
+++ b/docs/langchain_integration.md
@@ -50,11 +50,7 @@ If you meet the following OOM error while creating embeddings, please set a smal
openai.error.APIError: Invalid response object from API: '{"object":"error","message":"**NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.**\\n\\n(CUDA out of memory. Tried to allocate xxx MiB (GPU 0; xxx GiB total capacity; xxx GiB already allocated; xxx MiB free; xxx GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF)","code":50002}' (HTTP response code was 400)
~~~
-You can try
-
-~~~bash
-export FASTCHAT_WORKER_API_EMBEDDING_BATCH_SIZE=1
-~~~
+You can try `export FASTCHAT_WORKER_API_EMBEDDING_BATCH_SIZE=1`.
## Try local LangChain
diff --git a/fastchat/model/compression.py b/fastchat/model/compression.py
index 886eba27c4..2880384f01 100644
--- a/fastchat/model/compression.py
+++ b/fastchat/model/compression.py
@@ -5,7 +5,7 @@
from accelerate import init_empty_weights
from accelerate.utils import set_module_tensor_to_device
-from huggingface_hub import hf_hub_download
+from huggingface_hub import snapshot_download
import torch
from torch import Tensor
from torch.nn import functional as F
@@ -123,8 +123,7 @@ def load_compress_model(
base_pattern = os.path.join(model_path, "pytorch_model*.bin")
else:
# `model_path` is a cached Hugging Face repo
- cf = hf_hub_download(model_path, "config.json")
- model_path = os.path.dirname(cf)
+ model_path = snapshot_download(model_path, revision=revision)
base_pattern = os.path.join(model_path, "pytorch_model*.bin")
files = glob.glob(base_pattern)
| https://api.github.com/repos/lm-sys/FastChat/pulls/1730 | 2023-06-18T08:36:10Z | 2023-06-18T08:36:39Z | 2023-06-18T08:36:39Z | 2023-06-18T09:33:55Z | 686 | lm-sys/FastChat | 41,763 |
|
Create TowerOfHanoi.py | diff --git a/TowerOfHanoi.py b/TowerOfHanoi.py
new file mode 100644
index 0000000000..af89032fce
--- /dev/null
+++ b/TowerOfHanoi.py
@@ -0,0 +1,14 @@
+# Recursive Python function to solve the tower of hanoi --
+
+def TowerOfHanoi(n , source, destination, auxiliary):
+ if n==1:
+ print ("Move disk 1 from source",source,"to destination",destination)
+ return
+ TowerOfHanoi(n-1, source, auxiliary, destination)
+ print ("Move disk",n,"from source",source,"to destination",destination)
+ TowerOfHanoi(n-1, auxiliary, destination, source)
+
+# Driver code
+n = 4
+TowerOfHanoi(n,'A','B','C')
+# A, C, B are the name of rods
| Tower of Hanoi using Python | https://api.github.com/repos/geekcomputers/Python/pulls/1988 | 2023-10-03T10:38:14Z | 2023-10-07T13:40:42Z | 2023-10-07T13:40:42Z | 2023-10-07T13:40:42Z | 203 | geekcomputers/Python | 31,376 |
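The recursive solution in the diff above prints each move as a side effect. A variant that returns the moves instead makes the classic 2**n − 1 move count easy to verify (a sketch for illustration, not part of the PR; the function name is mine):

```python
def hanoi_moves(n, source, destination, auxiliary):
    """Return the list of (disk, from_rod, to_rod) moves for n disks."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then move them back on top.
    moves = hanoi_moves(n - 1, source, auxiliary, destination)
    moves.append((n, source, destination))
    moves += hanoi_moves(n - 1, auxiliary, destination, source)
    return moves

moves = hanoi_moves(4, 'A', 'B', 'C')
print(len(moves))  # 15 == 2**4 - 1
```

The recurrence T(n) = 2·T(n−1) + 1 with T(0) = 0 gives exactly 2**n − 1 moves, which is why the print-based version in the diff emits 15 lines for n = 4.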
Fixing return None for Radio and Selectbox | diff --git a/lib/streamlit/DeltaGenerator.py b/lib/streamlit/DeltaGenerator.py
index 6a63a8df1f51..b9dd5e17f9cf 100644
--- a/lib/streamlit/DeltaGenerator.py
+++ b/lib/streamlit/DeltaGenerator.py
@@ -1809,7 +1809,8 @@ def radio(self, element, label, options, index=0, format_func=str, key=None):
ui_value = _get_widget_ui_value("radio", element, user_key=key)
current_value = ui_value if ui_value is not None else index
- return options[current_value] if len(options) > 0 else NoValue
+
+ return options[current_value] if len(options) > 0 and options[current_value] is not None else NoValue
@_with_element
def selectbox(self, element, label, options, index=0, format_func=str, key=None):
@@ -1863,7 +1864,8 @@ def selectbox(self, element, label, options, index=0, format_func=str, key=None)
ui_value = _get_widget_ui_value("selectbox", element, user_key=key)
current_value = ui_value if ui_value is not None else index
- return options[current_value] if len(options) > 0 else NoValue
+
+ return options[current_value] if len(options) > 0 and options[current_value] is not None else NoValue
@_with_element
def slider(
diff --git a/lib/tests/streamlit/radio_test.py b/lib/tests/streamlit/radio_test.py
index c055ce287c75..09bcdd44157a 100644
--- a/lib/tests/streamlit/radio_test.py
+++ b/lib/tests/streamlit/radio_test.py
@@ -43,6 +43,12 @@ def test_valid_value(self):
self.assertEqual(c.label, "the label")
self.assertEqual(c.default, 1)
+ def test_noneType_option(self):
+ """Test NoneType option value."""
+ current_value = st.radio("the label", (None, "selected"), 0)
+
+ self.assertEqual(current_value, None)
+
@parameterized.expand(
[
(("m", "f"), ["m", "f"]),
diff --git a/lib/tests/streamlit/selectbox_test.py b/lib/tests/streamlit/selectbox_test.py
index 725787214d09..ae18e3fdb352 100644
--- a/lib/tests/streamlit/selectbox_test.py
+++ b/lib/tests/streamlit/selectbox_test.py
@@ -43,6 +43,12 @@ def test_valid_value(self):
self.assertEqual(c.label, "the label")
self.assertEqual(c.default, 1)
+ def test_noneType_option(self):
+ """Test NoneType option value."""
+ current_value = st.selectbox("the label", (None, "selected"), 0)
+
+ self.assertEqual(current_value, None)
+
@parameterized.expand(
[
(("m", "f"), ["m", "f"]),
| **Issue:** This PR fixes #982
**Description:** Use NoValue instead of None when there are no options or the selected option is None
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/998 | 2020-01-20T19:17:04Z | 2020-01-21T17:08:59Z | 2020-01-21T17:08:59Z | 2020-01-21T17:09:03Z | 666 | streamlit/streamlit | 22,067 |
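The fix above has to distinguish "nothing to return" from a legitimate `None` option, which is why it returns a `NoValue` sentinel rather than `None`. The general sentinel-object pattern looks like this (a generic sketch, not Streamlit's actual implementation; all names here are mine):

```python
class _NoValue:
    """Sentinel meaning 'nothing to return', distinct from a real None option."""
    def __repr__(self):
        return "<NoValue>"

NO_VALUE = _NoValue()

def select(options, index=0):
    # None is a perfectly valid option value, so it cannot double as the
    # "nothing selected" marker; the sentinel fills that role instead.
    if len(options) == 0:
        return NO_VALUE
    return options[index]

print(select([]))            # <NoValue>
print(select([None, "x"]))   # None -- a real option, not "no value"
```

Callers can then test `result is NO_VALUE` without ever confusing an empty widget with one whose selected option happens to be `None`.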
Updated README.md | diff --git a/forefront/README.md b/forefront/README.md
index 5b084af534..a2be818705 100644
--- a/forefront/README.md
+++ b/forefront/README.md
@@ -4,7 +4,7 @@
import forefront
# create an account
-token = forefront.Account.create(logging=True)
+token = forefront.Account.create(logging=False)
print(token)
# get a response
@@ -12,4 +12,5 @@ for response in forefront.StreamingCompletion.create(token = token,
prompt = 'hello world', model='gpt-4'):
print(response.completion.choices[0].text, end = '')
-```
\ No newline at end of file
+print("")
+```
| There is no need for logging, as it takes up so much space in the terminal window. Also added a print statement to add a blank line at the end of the Bot's response. | https://api.github.com/repos/xtekky/gpt4free/pulls/235 | 2023-04-28T04:10:43Z | 2023-04-28T08:08:27Z | 2023-04-28T08:08:27Z | 2023-04-28T08:08:28Z | 166 | xtekky/gpt4free | 37,885 |
git 3.1.30 api change, issue #8116 | diff --git a/modules/extensions.py b/modules/extensions.py
index 3eef9eaf65d..ed4b58fe39c 100644
--- a/modules/extensions.py
+++ b/modules/extensions.py
@@ -66,7 +66,7 @@ def list_files(self, subdir, extension):
def check_updates(self):
repo = git.Repo(self.path)
- for fetch in repo.remote().fetch("--dry-run"):
+ for fetch in repo.remote().fetch(dry_run=True):
if fetch.flags != fetch.HEAD_UPTODATE:
self.can_update = True
self.status = "behind"
@@ -79,8 +79,8 @@ def fetch_and_reset_hard(self):
repo = git.Repo(self.path)
# Fix: `error: Your local changes to the following files would be overwritten by merge`,
# because WSL2 Docker set 755 file permissions instead of 644, this results to the error.
- repo.git.fetch('--all')
- repo.git.reset('--hard', 'origin')
+ repo.git.fetch(all=True)
+ repo.git.reset('origin', hard=True)
def list_extensions():
| This is a fix for #8116, where GitPython made a breaking API change:
> [per this](https://github.com/gitpython-developers/GitPython/pull/1518) and [this changelog](https://github.com/gitpython-developers/GitPython/pull/1518) you can no longer feed arbitrary arguments, to prevent remote code execution.
Running on Ubuntu 22 under WSL.
Tested with installs/updates; works as expected.
<sup>edited to reference and auto-close:</sup> fixes #8116, fixes #8199, fixes #8116 | https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/8118 | 2023-02-25T19:30:33Z | 2023-03-11T14:47:52Z | 2023-03-11T14:47:52Z | 2023-03-11T15:36:14Z | 250 | AUTOMATIC1111/stable-diffusion-webui | 40,385 |
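The reason `repo.remote().fetch(dry_run=True)` replaces `fetch("--dry-run")` is that GitPython builds CLI flags from keyword arguments instead of accepting raw argument strings. A rough sketch of that convention — underscores dashified, `True` as a bare flag — is below (this is my simplified illustration, not GitPython's actual code):

```python
def kwargs_to_git_flags(**kwargs):
    """Sketch of how GitPython-style wrappers turn keyword arguments into
    git CLI flags: underscores become dashes, True means a bare flag,
    False/None are omitted, and other values become --name=value."""
    flags = []
    for name, value in kwargs.items():
        flag = "--" + name.replace("_", "-")
        if value is True:
            flags.append(flag)
        elif value is False or value is None:
            continue  # omitted entirely
        else:
            flags.append(f"{flag}={value}")
    return flags

print(kwargs_to_git_flags(dry_run=True))       # ['--dry-run']
print(kwargs_to_git_flags(hard=True))          # ['--hard']
print(kwargs_to_git_flags(depth=1, all=True))  # ['--depth=1', '--all']
```

Because only known keyword names survive this mapping, arbitrary strings can no longer be smuggled into the git command line, which is the remote-code-execution hole the GitPython 3.1.30 change closed.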
add conv system in cli | diff --git a/fastchat/conversation.py b/fastchat/conversation.py
index d41af70f32..71885dc4d5 100644
--- a/fastchat/conversation.py
+++ b/fastchat/conversation.py
@@ -257,7 +257,7 @@ def copy(self):
def dict(self):
return {
"template_name": self.name,
- "system": self.system_message,
+ "system_message": self.system_message,
"roles": self.roles,
"messages": self.messages,
"offset": self.offset,
diff --git a/fastchat/serve/cli.py b/fastchat/serve/cli.py
index 0a760e3051..1bee8cd454 100644
--- a/fastchat/serve/cli.py
+++ b/fastchat/serve/cli.py
@@ -208,6 +208,7 @@ def main(args):
args.load_8bit,
args.cpu_offloading,
args.conv_template,
+ args.conv_system_msg,
args.temperature,
args.repetition_penalty,
args.max_new_tokens,
@@ -238,6 +239,9 @@ def main(args):
parser.add_argument(
"--conv-template", type=str, default=None, help="Conversation prompt template."
)
+ parser.add_argument(
+ "--conv-system-msg", type=str, default=None, help="Conversation system message."
+ )
parser.add_argument("--temperature", type=float, default=0.7)
parser.add_argument("--repetition_penalty", type=float, default=1.0)
parser.add_argument("--max-new-tokens", type=int, default=512)
diff --git a/fastchat/serve/inference.py b/fastchat/serve/inference.py
index c4d4cea800..e963ce8672 100644
--- a/fastchat/serve/inference.py
+++ b/fastchat/serve/inference.py
@@ -1,7 +1,9 @@
"""Inference for FastChat models."""
import abc
import gc
+import json
import math
+import os
import sys
import time
from typing import Iterable, Optional, Dict
@@ -284,6 +286,7 @@ def chat_loop(
load_8bit: bool,
cpu_offloading: bool,
conv_template: Optional[str],
+ conv_system_msg: Optional[str],
temperature: float,
repetition_penalty: float,
max_new_tokens: int,
@@ -327,6 +330,8 @@ def new_chat():
conv = get_conv_template(conv_template)
else:
conv = get_conversation_template(model_path)
+ if conv_system_msg is not None:
+ conv.set_system_message(conv_system_msg)
return conv
conv = None
@@ -343,12 +348,10 @@ def new_chat():
if inp == "!!exit" or not inp:
print("exit...")
break
-
elif inp == "!!reset":
print("resetting...")
conv = new_chat()
continue
-
elif inp.startswith("!!save"):
args = inp.split(" ", 1)
@@ -362,14 +365,9 @@ def new_chat():
filename += ".json"
print("saving...", filename)
-
- import json
-
- with open(filename, "w") as file:
- json.dump(conv.dict(), file)
-
+ with open(filename, "w") as outfile:
+ json.dump(conv.dict(), outfile)
continue
-
elif inp.startswith("!!load"):
args = inp.split(" ", 1)
@@ -379,8 +377,6 @@ def new_chat():
else:
filename = args[1]
- import os
-
# Check if file exists and add .json if needed
if not os.path.exists(filename):
if (not filename.endswith(".json")) and os.path.exists(
@@ -392,22 +388,15 @@ def new_chat():
continue
print("loading...", filename)
-
- import json
-
- with open(filename, "r") as file:
- new_conv = json.load(file)
+ with open(filename, "r") as infile:
+ new_conv = json.load(infile)
conv = get_conv_template(new_conv["template_name"])
- conv.system = new_conv["system"]
- conv.roles = new_conv["roles"]
+ conv.set_system_message(new_conv["system_message"])
conv.messages = new_conv["messages"]
- conv.offset = new_conv["offset"]
-
for message in conv.messages[conv.offset :]:
chatio.prompt_for_output(message[0])
chatio.print_output(message[1])
-
continue
conv.append_message(conv.roles[0], inp)
| <!-- Thank you for your contribution! -->
<!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. -->
## Why are these changes needed?
1. Debugging dialogue performance under different character settings
<!-- Please give a short summary of the change and the problem this solves. -->
## Related issue number (if applicable)
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've run `format.sh` to lint the changes in this PR.
- [ ] I've included any doc changes needed.
- [ ] I've made sure the relevant tests are passing (if applicable).
| https://api.github.com/repos/lm-sys/FastChat/pulls/2098 | 2023-07-27T16:05:51Z | 2023-08-02T01:41:15Z | 2023-08-02T01:41:15Z | 2023-08-02T03:00:12Z | 1,010 | lm-sys/FastChat | 41,614 |
Fix Past CI | diff --git a/.github/workflows/self-past.yml b/.github/workflows/self-past.yml
index 6a154544df8b9..2ece4388d27c9 100644
--- a/.github/workflows/self-past.yml
+++ b/.github/workflows/self-past.yml
@@ -88,6 +88,10 @@ jobs:
working-directory: /transformers
run: python3 -m pip uninstall -y transformers && python3 -m pip install -e .
+ - name: Update some packages
+ working-directory: /transformers
+ run: python3 -m pip install -U datasets
+
- name: Echo folder ${{ matrix.folders }}
shell: bash
# For folders like `models/bert`, set an env. var. (`matrix_folders`) to `models_bert`, which will be used to
@@ -164,6 +168,10 @@ jobs:
working-directory: /transformers
run: python3 -m pip uninstall -y transformers && python3 -m pip install -e .
+ - name: Update some packages
+ working-directory: /transformers
+ run: python3 -m pip install -U datasets
+
- name: Echo folder ${{ matrix.folders }}
shell: bash
# For folders like `models/bert`, set an env. var. (`matrix_folders`) to `models_bert`, which will be used to
@@ -240,6 +248,10 @@ jobs:
working-directory: /transformers
run: python3 -m pip uninstall -y transformers && python3 -m pip install -e .
+ - name: Update some packages
+ working-directory: /transformers
+ run: python3 -m pip install -U datasets
+
- name: Install
working-directory: /transformers
run: |
| # What does this PR do?
Since mid-November, we have had hundreds of the following failures in each past CI run:
> (line 1108) NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
It is caused by `fsspec==2023.10.0` with an old `datasets`.
This PR just updates `datasets` (at CI runtime) to avoid this, fixing thousands of failures in total in past CI 🚀 🤣 | https://api.github.com/repos/huggingface/transformers/pulls/27696 | 2023-11-24T17:10:34Z | 2023-11-27T08:11:59Z | 2023-11-27T08:11:59Z | 2023-11-27T08:12:00Z | 412 | huggingface/transformers | 12,554 |
add: | diff --git a/metagpt/tools/moderation.py b/metagpt/tools/moderation.py
new file mode 100644
index 000000000..c56a6afc4
--- /dev/null
+++ b/metagpt/tools/moderation.py
@@ -0,0 +1,40 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+@Time : 2023/9/26 14:27
+@Author : zhanglei
+@File : moderation.py
+"""
+from typing import Union
+
+from metagpt.llm import LLM
+
+
+class Moderation:
+ def __init__(self):
+ self.llm = LLM()
+
+ def moderation(self, content: Union[str, list[str]]):
+ resp = []
+ if content:
+ moderation_results = self.llm.moderation(content=content)
+ results = moderation_results.results
+ for item in results:
+ resp.append(item.flagged)
+
+ return resp
+
+ async def amoderation(self, content: Union[str, list[str]]):
+ resp = []
+ if content:
+ moderation_results = await self.llm.amoderation(content=content)
+ results = moderation_results.results
+ for item in results:
+ resp.append(item.flagged)
+
+ return resp
+
+
+if __name__ == "__main__":
+ moderation = Moderation()
+ print(moderation.moderation(content=["I will kill you", "The weather is really nice today", "I want to hit you"]))
diff --git a/tests/metagpt/tools/test_moderation.py b/tests/metagpt/tools/test_moderation.py
new file mode 100644
index 000000000..225acff75
--- /dev/null
+++ b/tests/metagpt/tools/test_moderation.py
@@ -0,0 +1,42 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+@Time : 2023/9/26 14:46
+@Author : zhanglei
+@File : test_translate.py
+"""
+
+import pytest
+
+from metagpt.tools.moderation import Moderation
+
+
+@pytest.mark.parametrize(
+ ("content",),
+ [
+ [
+ ["I will kill you", "The weather is really nice today", "I want to hit you"],
+ ]
+ ],
+)
+def test_moderation(content):
+ moderation = Moderation()
+ results = moderation.moderation(content=content)
+ assert isinstance(results, list)
+ assert len(results) == len(content)
+
+
+@pytest.mark.asyncio
+@pytest.mark.parametrize(
+ ("content",),
+ [
+ [
+ ["I will kill you", "The weather is really nice today", "I want to hit you"],
+ ]
+ ],
+)
+async def test_amoderation(content):
+ moderation = Moderation()
+ results = await moderation.amoderation(content=content)
+ assert isinstance(results, list)
+ assert len(results) == len(content)
| 1. Moderation tool: determine whether the content is compliant; return True if not compliant, otherwise False.
2. Unit tests.
| https://api.github.com/repos/geekan/MetaGPT/pulls/399 | 2023-10-07T08:59:48Z | 2023-10-07T09:32:30Z | 2023-10-07T09:32:30Z | 2023-10-07T09:32:30Z | 700 | geekan/MetaGPT | 16,867 |
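The wrapper above just collects the `flagged` boolean from each moderation result. With the client stubbed out, that extraction logic can be exercised on its own (a sketch; the real class goes through `LLM` to the OpenAI moderation endpoint, and `FakeLLM` / `moderation_flags` are names I made up for illustration):

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class ModerationResult:
    flagged: bool

@dataclass
class ModerationResponse:
    results: List[ModerationResult]

class FakeLLM:
    """Stand-in for the real LLM client: flags anything containing 'kill'."""
    def moderation(self, content: Union[str, List[str]]):
        items = [content] if isinstance(content, str) else content
        return ModerationResponse(
            results=[ModerationResult(flagged="kill" in c) for c in items]
        )

def moderation_flags(llm, content):
    # Mirrors the PR's guard: empty input short-circuits to an empty list.
    if not content:
        return []
    return [item.flagged for item in llm.moderation(content=content).results]

print(moderation_flags(FakeLLM(), ["I will kill you", "nice weather"]))
# [True, False]
```

Substituting the real client only changes where the `results` come from; the per-item `flagged` extraction stays identical, which is what the unit tests in the diff assert on.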
AutoResetWrapper integration with gym.make() | diff --git a/gym/envs/registration.py b/gym/envs/registration.py
index f53ac621205..70f0b9345e5 100644
--- a/gym/envs/registration.py
+++ b/gym/envs/registration.py
@@ -90,6 +90,7 @@ class EnvSpec:
nondeterministic: Whether this environment is non-deterministic even after seeding
max_episode_steps: The maximum number of steps that an episode can consist of
order_enforce: Whether to wrap the environment in an orderEnforcing wrapper
+ autoreset: Whether the environment should automatically reset when it reaches the done state
kwargs: The kwargs to pass to the environment class
"""
@@ -100,6 +101,7 @@ class EnvSpec:
nondeterministic: bool = field(default=False)
max_episode_steps: Optional[int] = field(default=None)
order_enforce: bool = field(default=True)
+ autoreset: bool = field(default=False)
kwargs: dict = field(default_factory=dict)
namespace: Optional[str] = field(init=False)
name: str = field(init=False)
@@ -132,6 +134,10 @@ def make(self, **kwargs) -> Env:
_kwargs = self.kwargs.copy()
_kwargs.update(kwargs)
+ if "autoreset" in _kwargs:
+ self.autoreset = _kwargs["autoreset"]
+ del _kwargs["autoreset"]
+
if callable(self.entry_point):
env = self.entry_point(**_kwargs)
else:
@@ -142,15 +148,23 @@ def make(self, **kwargs) -> Env:
spec = copy.deepcopy(self)
spec.kwargs = _kwargs
env.unwrapped.spec = spec
+
if self.order_enforce:
from gym.wrappers.order_enforcing import OrderEnforcing
env = OrderEnforcing(env)
+
assert env.spec is not None, "expected spec to be set to the unwrapped env."
if env.spec.max_episode_steps is not None:
from gym.wrappers.time_limit import TimeLimit
env = TimeLimit(env, max_episode_steps=env.spec.max_episode_steps)
+
+ if self.autoreset:
+ from gym.wrappers.autoreset import AutoResetWrapper
+
+ env = AutoResetWrapper(env)
+
return env
diff --git a/tests/wrappers/test_autoreset.py b/tests/wrappers/test_autoreset.py
index f003a7280df..827f08f4b9a 100644
--- a/tests/wrappers/test_autoreset.py
+++ b/tests/wrappers/test_autoreset.py
@@ -1,10 +1,13 @@
+import types
from typing import Optional
+from unittest.mock import MagicMock
import numpy as np
import pytest
import gym
from gym.wrappers import AutoResetWrapper
+from tests.envs.spec_list import spec_list
class DummyResetEnv(gym.Env):
@@ -62,6 +65,49 @@ def test_autoreset_reset_info():
assert isinstance(info, dict)
+@pytest.mark.parametrize("spec", spec_list)
+def test_make_autoreset_true(spec):
+ """
+ Note: This test assumes that the outermost wrapper is AutoResetWrapper
+ so if that is being changed in the future, this test will break and need
+ to be updated.
+ Note: This test assumes that all first-party environments will terminate in a finite
+ amount of time with random actions, which is true as of the time of adding this test.
+ """
+ env = None
+ with pytest.warns(None) as warnings:
+ env = spec.make(autoreset=True)
+
+ ob_space = env.observation_space
+ obs = env.reset(seed=0)
+ env.action_space.seed(0)
+
+ env.unwrapped.reset = MagicMock(side_effect=env.unwrapped.reset)
+
+ done = False
+ while not done:
+ obs, reward, done, info = env.step(env.action_space.sample())
+
+ assert isinstance(env, AutoResetWrapper)
+ assert env.unwrapped.reset.called
+
+
+@pytest.mark.parametrize("spec", spec_list)
+def test_make_autoreset_false(spec):
+ env = None
+ with pytest.warns(None) as warnings:
+ env = spec.make(autoreset=False)
+ assert not isinstance(env, AutoResetWrapper)
+
+
+@pytest.mark.parametrize("spec", spec_list)
+def test_make_autoreset_default_false(spec):
+ env = None
+ with pytest.warns(None) as warnings:
+ env = spec.make()
+ assert not isinstance(env, AutoResetWrapper)
+
+
def test_autoreset_autoreset():
env = DummyResetEnv()
env = AutoResetWrapper(env)
| # Description
This is a pull request to integrate the autoreset wrapper into gym.make() with it being disabled by default. The keyword argument is autoreset. If autoreset is set to False or a value is not provided for autoreset in gym.make, then the env will not be wrapped with the autoreset wrapper.
# Notes for Code Reviewers
Something to note that might be relevant for the code review is that currently the only test is for cartpole because it is guaranteed to terminate in a relatively short number of time-steps. I'm open to suggestions for how to improve test coverage.
Also it might be worth checking if you agree with the way I drop the autoreset keyword from _kwargs in registration.py before passing _kwargs to the entrypoint function. I'm not sure if there was a better way, but it seemed correct because we are not adding auto-reset as a keyword argument to the actual environment constructors (so far at least).
Fixes https://github.com/openai/gym/issues/2564
## Type of change
Please delete options that are not relevant.
- [x] This change requires a documentation update
# Checklist:
- [x] I have run the [`pre-commit` checks](https://pre-commit.com/) with `pre-commit run --all-files` (see `CONTRIBUTING.md` instructions to set it up)
- [ ] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
| https://api.github.com/repos/openai/gym/pulls/2728 | 2022-04-03T01:27:28Z | 2022-04-08T14:54:50Z | 2022-04-08T14:54:49Z | 2022-04-12T21:46:56Z | 1,031 | openai/gym | 5,390 |
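The wrapper being wired into `gym.make` resets the environment as soon as a step reports done, so callers never call `reset()` between episodes. Stripped of gym itself, the core behavior is just this (a minimal sketch, not the actual `AutoResetWrapper`; the env, class names, and the `terminal_observation` info key are illustrative):

```python
class CountdownEnv:
    """Toy env: episode terminates after 3 steps."""
    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, 0.0, self.t >= 3, {}

class AutoReset:
    def __init__(self, env):
        self.env = env

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if done:
            # Stash the terminal observation, then reset immediately so the
            # next obs the caller sees already belongs to the new episode.
            info["terminal_observation"] = obs
            obs = self.env.reset()
        return obs, reward, done, info

env = AutoReset(CountdownEnv())
env.reset()
for _ in range(4):
    obs, reward, done, info = env.step(None)
print(obs, done)  # 1 False -- already one step into the next episode
```

This also explains the test strategy in the diff: with `autoreset=True`, stepping past termination is safe, and the wrapped env's inner `reset` can be asserted to have been called.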
Add test for S3 list_object_versions with EncodingType parameter | diff --git a/tests/integration/test_s3.py b/tests/integration/test_s3.py
index faecbc4c5df40..61879a83fed58 100644
--- a/tests/integration/test_s3.py
+++ b/tests/integration/test_s3.py
@@ -772,6 +772,21 @@ def test_etag_on_get_object_call(self):
# clean up
self._delete_bucket(TEST_BUCKET_NAME_2, [TEST_KEY_2])
+ def test_get_object_versioning(self):
+ bucket_name = 'bucket-%s' % short_uid()
+
+ self.s3_client.create_bucket(Bucket=bucket_name)
+ rs = self.s3_client.list_object_versions(
+ Bucket=bucket_name,
+ EncodingType='url'
+ )
+
+ self.assertEqual(rs['ResponseMetadata']['HTTPStatusCode'], 200)
+ self.assertEqual(rs['Name'], bucket_name)
+
+ # clean up
+ self._delete_bucket(bucket_name, [])
+
# ---------------
# HELPER METHODS
# ---------------
| Add test for S3 list_object_versions with EncodingType parameter - addresses #452 | https://api.github.com/repos/localstack/localstack/pulls/2201 | 2020-03-26T15:24:04Z | 2020-03-26T18:25:37Z | 2020-03-26T18:25:37Z | 2020-03-26T18:25:53Z | 231 | localstack/localstack | 28,904 |
changed prints to logging in utils/datasets | diff --git a/utils/datasets.py b/utils/datasets.py
index 841879a5cf8..5f1ff5f1a7f 100755
--- a/utils/datasets.py
+++ b/utils/datasets.py
@@ -1,6 +1,7 @@
# Dataset utils and dataloaders
import glob
+import logging
import math
import os
import random
@@ -21,6 +22,8 @@
from utils.general import xyxy2xywh, xywh2xyxy
from utils.torch_utils import torch_distributed_zero_first
+logger = logging.getLogger(__name__)
+
# Parameters
help_url = 'https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data'
img_formats = ['.bmp', '.jpg', '.jpeg', '.png', '.tif', '.tiff', '.dng']
@@ -165,14 +168,14 @@ def __next__(self):
ret_val, img0 = self.cap.read()
self.frame += 1
- print('video %g/%g (%g/%g) %s: ' % (self.count + 1, self.nf, self.frame, self.nframes, path), end='')
+ logger.debug('video %g/%g (%g/%g) %s: ', self.count + 1, self.nf, self.frame, self.nframes, path)
else:
# Read image
self.count += 1
img0 = cv2.imread(path) # BGR
assert img0 is not None, 'Image Not Found ' + path
- print('image %g/%g %s: ' % (self.count, self.nf, path), end='')
+ logger.debug('image %g/%g %s: ', self.count, self.nf, path)
# Padded resize
img = letterbox(img0, new_shape=self.img_size)[0]
@@ -234,7 +237,7 @@ def __next__(self):
# Print
assert ret_val, 'Camera Error %s' % self.pipe
img_path = 'webcam.jpg'
- print('webcam %g: ' % self.count, end='')
+ logger.debug('webcam %g: ', self.count)
# Padded resize
img = letterbox(img0, new_shape=self.img_size)[0]
@@ -265,7 +268,7 @@ def __init__(self, sources='streams.txt', img_size=640):
self.sources = sources
for i, s in enumerate(sources):
# Start the thread to read frames from the video stream
- print('%g/%g: %s... ' % (i + 1, n, s), end='')
+ logger.debug('%g/%g: %s... ', i + 1, n, s)
cap = cv2.VideoCapture(eval(s) if s.isnumeric() else s)
assert cap.isOpened(), 'Failed to open %s' % s
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
@@ -273,15 +276,14 @@ def __init__(self, sources='streams.txt', img_size=640):
fps = cap.get(cv2.CAP_PROP_FPS) % 100
_, self.imgs[i] = cap.read() # guarantee first frame
thread = Thread(target=self.update, args=([i, cap]), daemon=True)
- print(' success (%gx%g at %.2f FPS).' % (w, h, fps))
+ logger.debug(' success (%gx%g at %.2f FPS).', w, h, fps)
thread.start()
- print('') # newline
# check for common shapes
s = np.stack([letterbox(x, new_shape=self.img_size)[0].shape for x in self.imgs], 0) # inference shapes
self.rect = np.unique(s, axis=0).shape[0] == 1 # rect inference if all shapes equal
if not self.rect:
- print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
+ logger.warning('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
def update(self, index, cap):
# Read next stream frame in a daemon thread
@@ -419,7 +421,7 @@ def img2label_paths(img_paths):
assert (l >= 0).all(), 'negative labels: %s' % file
assert (l[:, 1:] <= 1).all(), 'non-normalized or out of bounds coordinate labels: %s' % file
if np.unique(l, axis=0).shape[0] < l.shape[0]: # duplicate rows
- nd += 1 # print('WARNING: duplicate rows in %s' % self.label_files[i]) # duplicate rows
+ nd += 1 # logger.warning('WARNING: duplicate rows in %s', self.label_files[i]) # duplicate rows
if single_cls:
l[:, 0] = 0 # force dataset into single-class mode
self.labels[i] = l
@@ -456,7 +458,7 @@ def img2label_paths(img_paths):
b[[1, 3]] = np.clip(b[[1, 3]], 0, h)
assert cv2.imwrite(f, img[b[1]:b[3], b[0]:b[2]]), 'Failure extracting classifier boxes'
else:
- ne += 1 # print('empty labels for image %s' % self.img_files[i]) # file empty
+ ne += 1 # logger.info('empty labels for image %s', self.img_files[i]) # file empty
# os.system("rm '%s' '%s'" % (self.img_files[i], self.label_files[i])) # remove
if rank in [-1, 0]:
@@ -464,7 +466,7 @@ def img2label_paths(img_paths):
cache_path, nf, nm, ne, nd, n)
if nf == 0:
s = 'WARNING: No labels found in %s. See %s' % (os.path.dirname(file) + os.sep, help_url)
- print(s)
+ logger.info(s)
assert not augment, '%s. Can not train without labels.' % s
# Cache images into memory for faster training (WARNING: large datasets may exceed system RAM)
@@ -497,7 +499,7 @@ def cache_labels(self, path='labels.cache'):
l = np.zeros((0, 5), dtype=np.float32)
x[img] = [l, shape]
except Exception as e:
- print('WARNING: Ignoring corrupted image and/or label %s: %s' % (img, e))
+ logger.warning('WARNING: Ignoring corrupted image and/or label %s: %s', img, e)
x['hash'] = get_hash(self.label_files + self.img_files)
torch.save(x, path) # save for next time
@@ -508,7 +510,7 @@ def __len__(self):
# def __iter__(self):
# self.count = -1
- # print('ran dataset iter')
+ # logger.info('ran dataset iter')
# #self.shuffled_vector = np.random.permutation(self.nF) if self.augment else np.arange(self.nF)
# return self
| Changed prints to logging in utils/datasets.
I've only changed these ones since that's a part that causes output problems in our tool. I didn't want to change more than that without being able to easily test, sorry.
Usage of info vs debug messages was based purely on common sense.
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Refactor print statements to logging in dataset utils.
### 📊 Key Changes
- Replaced `print` statements with `logger` calls to handle messaging in `datasets.py`.
- Imported `logging` to use logging functionality.
- Added a warning log for different stream shapes in video sources.
- Changed info and debug messages to integrate with the logger.
### 🎯 Purpose & Impact
- 💡 **Purpose:** The changes introduce a more standard and configurable way to handle output messages within the code, moving from simple print statements to a logging framework.
- 🌍 **Impact to users:**
- Enhanced ability to control the verbosity of messages by setting different logging levels (e.g., DEBUG, INFO, WARNING).
- Improved clarity of output messages when running with different execution environments, such as in production or during debugging.
- Easier integration with monitoring tools and log aggregators that support standard logging protocols. | https://api.github.com/repos/ultralytics/yolov5/pulls/1315 | 2020-11-06T17:20:09Z | 2020-11-24T15:03:19Z | 2020-11-24T15:03:19Z | 2024-01-19T20:35:03Z | 1,628 | ultralytics/yolov5 | 25,409 |
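The module-level logger pattern the PR adopts (`logger = logging.getLogger(__name__)`) leaves verbosity to whoever configures logging: the per-frame messages above only appear once the level is lowered to DEBUG, while warnings always come through. A minimal, self-contained illustration (the logger name and messages echo the diff, but this is a standalone sketch):

```python
import io
import logging

logger = logging.getLogger("utils.datasets")  # stand-in for __name__

# Capture output in a buffer instead of stderr so we can inspect it.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(handler)

logger.setLevel(logging.INFO)
logger.debug("video %g/%g: %s", 1, 10, "a.mp4")   # filtered out at INFO
logger.warning("WARNING: Different stream shapes detected.")

logger.setLevel(logging.DEBUG)
logger.debug("image %g/%g %s", 2, 10, "b.jpg")    # now emitted

print(buf.getvalue())
```

Note the lazy `%`-style arguments (`logger.debug("image %g/%g %s", ...)` rather than pre-formatted strings), matching the diff: formatting only happens if the record actually passes the level check.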
Restart: only do restart if running via the wrapper script | diff --git a/modules/restart.py b/modules/restart.py
new file mode 100644
index 00000000000..18eacaf377e
--- /dev/null
+++ b/modules/restart.py
@@ -0,0 +1,23 @@
+import os
+from pathlib import Path
+
+from modules.paths_internal import script_path
+
+
+def is_restartable() -> bool:
+ """
+ Return True if the webui is restartable (i.e. there is something watching to restart it with)
+ """
+ return bool(os.environ.get('SD_WEBUI_RESTART'))
+
+
+def restart_program() -> None:
+ """creates file tmp/restart and immediately stops the process, which webui.bat/webui.sh interpret as a command to start webui again"""
+
+ (Path(script_path) / "tmp" / "restart").touch()
+
+ stop_program()
+
+
+def stop_program() -> None:
+ os._exit(0)
diff --git a/modules/shared.py b/modules/shared.py
index 2bd7c6ec493..c9ee2dd1351 100644
--- a/modules/shared.py
+++ b/modules/shared.py
@@ -853,12 +853,3 @@ def walk_files(path, allowed_extensions=None):
continue
yield os.path.join(root, filename)
-
-
-def restart_program():
- """creates file tmp/restart and immediately stops the process, which webui.bat/webui.sh interpret as a command to start webui again"""
-
- with open(os.path.join(script_path, "tmp", "restart"), "w"):
- pass
-
- os._exit(0)
diff --git a/modules/ui_extensions.py b/modules/ui_extensions.py
index 5580dfafe5f..3d216912d70 100644
--- a/modules/ui_extensions.py
+++ b/modules/ui_extensions.py
@@ -11,7 +11,7 @@
import shutil
import errno
-from modules import extensions, shared, paths, config_states, errors
+from modules import extensions, shared, paths, config_states, errors, restart
from modules.paths_internal import config_states_dir
from modules.call_queue import wrap_gradio_gpu_call
@@ -49,7 +49,11 @@ def apply_and_restart(disable_list, update_list, disable_all):
shared.opts.disabled_extensions = disabled
shared.opts.disable_all_extensions = disable_all
shared.opts.save(shared.config_filename)
- shared.restart_program()
+
+ if restart.is_restartable():
+ restart.restart_program()
+ else:
+ restart.stop_program()
def save_config_state(name):
@@ -508,7 +512,8 @@ def create_ui():
with gr.TabItem("Installed", id="installed"):
with gr.Row(elem_id="extensions_installed_top"):
- apply = gr.Button(value="Apply and restart UI", variant="primary")
+ apply_label = ("Apply and restart UI" if restart.is_restartable() else "Apply and quit")
+ apply = gr.Button(value=apply_label, variant="primary")
check = gr.Button(value="Check for updates")
extensions_disable_all = gr.Radio(label="Disable all extensions", choices=["none", "extra", "all"], value=shared.opts.disable_all_extensions, elem_id="extensions_disable_all")
extensions_disabled_list = gr.Text(elem_id="extensions_disabled_list", visible=False).style(container=False)
diff --git a/webui.bat b/webui.bat
index 961fc7d4c34..42e7d517d18 100644
--- a/webui.bat
+++ b/webui.bat
@@ -3,7 +3,7 @@
if not defined PYTHON (set PYTHON=python)
if not defined VENV_DIR (set "VENV_DIR=%~dp0%venv")
-
+set SD_WEBUI_RESTART=tmp/restart
set ERROR_REPORTING=FALSE
mkdir tmp 2>NUL
diff --git a/webui.sh b/webui.sh
index c407b3efe7c..6c48e9699a9 100755
--- a/webui.sh
+++ b/webui.sh
@@ -204,6 +204,7 @@ prepare_tcmalloc() {
}
KEEP_GOING=1
+export SD_WEBUI_RESTART=tmp/restart
while [[ "$KEEP_GOING" -eq "1" ]]; do
if [[ ! -z "${ACCELERATE}" ]] && [ ${ACCELERATE}="True" ] && [ -x "$(command -v accelerate)" ]; then
printf "\n%s\n" "${delimiter}"
|
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [ ] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/11043 | 2023-06-05T17:00:53Z | 2023-06-05T17:06:40Z | 2023-06-05T17:06:40Z | 2023-06-27T09:11:29Z | 986 | AUTOMATIC1111/stable-diffusion-webui | 40,400 |
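The restart handshake this PR wires up (the wrapper exports `SD_WEBUI_RESTART`, the app touches `tmp/restart` and exits, and `webui.bat`/`webui.sh` relaunch when that file exists) can be sketched as below. This is an illustrative sketch, not the webui code itself: the `request_restart` name and the injectable `exit_fn` parameter are assumptions added so the example can run without killing the process (the real code calls `os._exit(0)`).

```python
import os
from pathlib import Path


def is_restartable(environ=os.environ) -> bool:
    """True when a wrapper script set SD_WEBUI_RESTART and can relaunch us."""
    return bool(environ.get("SD_WEBUI_RESTART"))


def request_restart(script_path, environ=os.environ, exit_fn=None) -> bool:
    """Touch tmp/restart under script_path so the wrapper relaunches us.

    Returns False (doing nothing) when no wrapper is watching.  exit_fn is
    injected so this sketch is testable; the real code calls os._exit(0).
    """
    if not is_restartable(environ):
        return False
    tmp_dir = Path(script_path) / "tmp"
    tmp_dir.mkdir(parents=True, exist_ok=True)
    (tmp_dir / "restart").touch()  # the wrapper loop checks for this file
    (exit_fn or (lambda code: None))(0)
    return True
```

A wrapper script would then loop: run the app, and if `tmp/restart` exists afterwards, delete it and run again.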
VW: add missing Arteon 2019 FW | diff --git a/selfdrive/car/volkswagen/values.py b/selfdrive/car/volkswagen/values.py
index e63975b0c03134..bdd82088a1fee2 100755
--- a/selfdrive/car/volkswagen/values.py
+++ b/selfdrive/car/volkswagen/values.py
@@ -293,6 +293,7 @@ def init_make(self, CP: car.CarParams):
b'\xf1\x873G0906259N \xf1\x890004',
b'\xf1\x873G0906259P \xf1\x890001',
b'\xf1\x875NA907115H \xf1\x890002',
+ b'\xf1\x873G0906259G \xf1\x890004',
],
(Ecu.transmission, 0x7e1, None): [
b'\xf1\x8709G927158L \xf1\x893611',
@@ -304,17 +305,19 @@ def init_make(self, CP: car.CarParams):
b'\xf1\x873Q0959655BK\xf1\x890703\xf1\x82\x0e1616001613121157161111572900',
b'\xf1\x873Q0959655BK\xf1\x890703\xf1\x82\x0e1616001613121177161113772900',
b'\xf1\x873Q0959655DL\xf1\x890732\xf1\x82\0161812141812171105141123052J00',
+ b'\xf1\x873Q0959655CK\xf1\x890711\xf1\x82\x0e1712141712141105121122052900',
],
(Ecu.eps, 0x712, None): [
b'\xf1\x873Q0909144K \xf1\x895072\xf1\x82\x0571B41815A1',
b'\xf1\x873Q0909144L \xf1\x895081\xf1\x82\x0571B00817A1',
- b'\xf1\x875Q0910143C \xf1\x892211\xf1\x82\00567B0020800',
+ b'\xf1\x875Q0910143C \xf1\x892211\xf1\x82\x0567B0020800',
b'\xf1\x875WA907145M \xf1\x891051\xf1\x82\x002MB4092M7N',
],
(Ecu.fwdRadar, 0x757, None): [
b'\xf1\x872Q0907572AA\xf1\x890396',
b'\xf1\x872Q0907572T \xf1\x890383',
b'\xf1\x875Q0907572J \xf1\x890654',
+ b'\xf1\x875Q0907572R \xf1\x890771',
],
},
CAR.ATLAS_MK1: {
| https://api.github.com/repos/commaai/openpilot/pulls/27671 | 2023-03-24T08:06:06Z | 2023-03-25T02:11:51Z | 2023-03-25T02:11:51Z | 2023-03-25T02:11:52Z | 642 | commaai/openpilot | 9,857 |
|
Don't memoize config._server_headless | diff --git a/lib/streamlit/config.py b/lib/streamlit/config.py
index c14e35cfb5db..f00a406de351 100644
--- a/lib/streamlit/config.py
+++ b/lib/streamlit/config.py
@@ -450,22 +450,24 @@ def _server_cookie_secret():
@_create_option("server.headless", type_=bool)
-@util.memoize
def _server_headless():
"""If false, will attempt to open a browser window on start.
Default: false unless (1) we are on a Linux box where DISPLAY is unset, or
(2) server.liveSave is set.
"""
- is_live_save_on = get_option("server.liveSave")
- is_linux = env_util.IS_LINUX_OR_BSD
- has_display_env = not os.getenv("DISPLAY")
- is_running_in_editor_plugin = (
- os.getenv("IS_RUNNING_IN_STREAMLIT_EDITOR_PLUGIN") is not None
- )
- return (
- is_live_save_on or (is_linux and has_display_env) or is_running_in_editor_plugin
- )
+ if get_option("server.liveSave"):
+ return True
+
+ if env_util.IS_LINUX_OR_BSD and not os.getenv("DISPLAY"):
+ # We're running in Linux and DISPLAY is unset
+ return True
+
+ if os.getenv("IS_RUNNING_IN_STREAMLIT_EDITOR_PLUGIN") is not None:
+ # We're running within the Streamlit Atom plugin
+ return True
+
+ return False
@_create_option("server.liveSave", type_=bool, visibility="hidden")
| When we memoize any computed config option, we introduce global state that doesn't get reset across tests. This was triggering test failures in ConfigTest.py.
(This also reorganizes the `config._server_headless` function for clarity.) | https://api.github.com/repos/streamlit/streamlit/pulls/2858 | 2021-02-25T02:14:55Z | 2021-02-25T02:24:02Z | 2021-02-25T02:24:02Z | 2021-07-24T00:37:08Z | 353 | streamlit/streamlit | 22,165
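The failure mode described in the PR body (a memoized, state-reading config function caching its first answer across tests) can be reproduced with a toy memoizer. This is an illustration only, not Streamlit's actual `util.memoize` or config machinery; the `options` dict stands in for mutable config state.

```python
def memoize(func):
    """Cache the first result per argument tuple forever.

    Fine for pure functions, but it freezes the answer for anything that
    reads mutable state (config options, os.environ, ...) the way
    _server_headless does, so later tests see a stale value.
    """
    cache = {}

    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]

    return wrapper


options = {"server.liveSave": False}  # stand-in for mutable config state


@memoize
def headless_memoized():
    return options["server.liveSave"]


def headless_fresh():
    return options["server.liveSave"]
```

Removing the decorator (as the PR does) makes every call recompute from current state, so tests that mutate config no longer leak into each other.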
Actually disable docstring prefix normalization with -S + fix instability | diff --git a/CHANGES.md b/CHANGES.md
index 09954f2b73..3a4d8fdf17 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -19,6 +19,8 @@
- Single-character closing docstring quotes are no longer moved to their own line as
this is invalid. This was a bug introduced in version 22.6.0. (#3166)
+- `--skip-string-normalization` / `-S` now prevents docstring prefixes from being
+ normalized as expected (#3168)
### _Blackd_
diff --git a/src/black/linegen.py b/src/black/linegen.py
index 20f3ac6fff..1f132b7888 100644
--- a/src/black/linegen.py
+++ b/src/black/linegen.py
@@ -293,7 +293,24 @@ def visit_STRING(self, leaf: Leaf) -> Iterator[Line]:
if is_docstring(leaf) and "\\\n" not in leaf.value:
# We're ignoring docstrings with backslash newline escapes because changing
# indentation of those changes the AST representation of the code.
- docstring = normalize_string_prefix(leaf.value)
+ if Preview.normalize_docstring_quotes_and_prefixes_properly in self.mode:
+ # There was a bug where --skip-string-normalization wouldn't stop us
+ # from normalizing docstring prefixes. To maintain stability, we can
+ # only address this buggy behaviour while the preview style is enabled.
+ if self.mode.string_normalization:
+ docstring = normalize_string_prefix(leaf.value)
+ # visit_default() does handle string normalization for us, but
+ # since this method acts differently depending on quote style (ex.
+ # see padding logic below), there's a possibility for unstable
+ # formatting as visit_default() is called *after*. To avoid a
+ # situation where this function formats a docstring differently on
+ # the second pass, normalize it early.
+ docstring = normalize_string_quotes(docstring)
+ else:
+ docstring = leaf.value
+ else:
+ # ... otherwise, we'll keep the buggy behaviour >.<
+ docstring = normalize_string_prefix(leaf.value)
prefix = get_string_prefix(docstring)
docstring = docstring[len(prefix) :] # Remove the prefix
quote_char = docstring[0]
diff --git a/src/black/mode.py b/src/black/mode.py
index 896c516df7..b7359fab21 100644
--- a/src/black/mode.py
+++ b/src/black/mode.py
@@ -145,12 +145,13 @@ def supports_feature(target_versions: Set[TargetVersion], feature: Feature) -> b
class Preview(Enum):
"""Individual preview style features."""
- string_processing = auto()
- remove_redundant_parens = auto()
- one_element_subscript = auto()
annotation_parens = auto()
long_docstring_quotes_on_newline = auto()
+ normalize_docstring_quotes_and_prefixes_properly = auto()
+ one_element_subscript = auto()
remove_block_trailing_newline = auto()
+ remove_redundant_parens = auto()
+ string_processing = auto()
class Deprecated(UserWarning):
diff --git a/tests/data/miscellaneous/docstring_preview_no_string_normalization.py b/tests/data/miscellaneous/docstring_preview_no_string_normalization.py
new file mode 100644
index 0000000000..0957231eb9
--- /dev/null
+++ b/tests/data/miscellaneous/docstring_preview_no_string_normalization.py
@@ -0,0 +1,10 @@
+def do_not_touch_this_prefix():
+ R"""There was a bug where docstring prefixes would be normalized even with -S."""
+
+
+def do_not_touch_this_prefix2():
+ F'There was a bug where docstring prefixes would be normalized even with -S.'
+
+
+def do_not_touch_this_prefix3():
+ uR'''There was a bug where docstring prefixes would be normalized even with -S.'''
diff --git a/tests/data/simple_cases/docstring.py b/tests/data/simple_cases/docstring.py
index 7153be468c..f08bba575f 100644
--- a/tests/data/simple_cases/docstring.py
+++ b/tests/data/simple_cases/docstring.py
@@ -209,6 +209,13 @@ def multiline_docstring_at_line_limit():
second line----------------------------------------------------------------------"""
+def stable_quote_normalization_with_immediate_inner_single_quote(self):
+ ''''<text here>
+
+ <text here, since without another non-empty line black is stable>
+ '''
+
+
# output
class MyClass:
@@ -417,3 +424,10 @@ def multiline_docstring_at_line_limit():
"""first line-----------------------------------------------------------------------
second line----------------------------------------------------------------------"""
+
+
+def stable_quote_normalization_with_immediate_inner_single_quote(self):
+ """'<text here>
+
+ <text here, since without another non-empty line black is stable>
+ """
diff --git a/tests/test_format.py b/tests/test_format.py
index 0e1059c61e..86339f24b8 100644
--- a/tests/test_format.py
+++ b/tests/test_format.py
@@ -139,6 +139,18 @@ def test_docstring_no_string_normalization() -> None:
assert_format(source, expected, mode)
+def test_preview_docstring_no_string_normalization() -> None:
+ """
+ Like test_docstring but with string normalization off *and* the preview style
+ enabled.
+ """
+ source, expected = read_data(
+ "miscellaneous", "docstring_preview_no_string_normalization"
+ )
+ mode = replace(DEFAULT_MODE, string_normalization=False, preview=True)
+ assert_format(source, expected, mode)
+
+
def test_long_strings_flag_disabled() -> None:
"""Tests for turning off the string processing logic."""
source, expected = read_data("miscellaneous", "long_strings_flag_disabled")
| ### Description
The former was a regression I introduced a long time ago. To avoid changing the stable style too much, the regression is only fixed if `--preview` is enabled.
*annoying enough, as we currently always enforce a second format pass if changes were made, there's no good way to prove the existence of the docstring quote normalization instability issue. For posterity, here's one failing example:
```diff
--- source
+++ first pass
@@ -1,7 +1,7 @@
def some_function(self):
- ''''<text here>
+ """ '<text here>
<text here, since without another non-empty line black is stable>
- '''
+ """
pass
--- first pass
+++ second pass
@@ -1,7 +1,7 @@
def some_function(self):
- """ '<text here>
+ """'<text here>
<text here, since without another non-empty line black is stable>
"""
pass
```
### Checklist - did you ...
- [x] Add a CHANGELOG entry if necessary?
- [x] Add / update tests if necessary?
- [x] Add new / update outdated documentation?
 | https://api.github.com/repos/psf/black/pulls/3168 | 2022-07-14T17:55:27Z | 2022-07-14T23:47:34Z | 2022-07-14T23:47:33Z | 2022-07-15T00:01:57Z | 1,325 | psf/black | 24,218
DOC: Fix typo | diff --git a/tools/pinning/current/pyproject.toml b/tools/pinning/current/pyproject.toml
index 5802f87f187..b5959bc201d 100644
--- a/tools/pinning/current/pyproject.toml
+++ b/tools/pinning/current/pyproject.toml
@@ -76,7 +76,7 @@ setuptools-rust = "*"
pylint = ">2.6.2"
# Bug in poetry, where still installes yanked versions from pypi (source: https://github.com/python-poetry/poetry/issues/2453)
-# this version of cryptography intreduced a security vulnrability.
+# this version of cryptography introduced a security vulnrability.
# Making sure that it would not get installed (Fixing https://github.com/certbot/certbot/issues/9336)
cryptography = "!= 37.0.3"
| ## Pull Request Checklist
- [x] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [x] Add or update any documentation as needed to support the changes in this PR.
| https://api.github.com/repos/certbot/certbot/pulls/9346 | 2022-07-09T11:48:53Z | 2022-07-11T18:30:50Z | 2022-07-11T18:30:50Z | 2022-07-11T18:30:50Z | 189 | certbot/certbot | 1,148 |
Enable pip cache and set Python version | diff --git a/.travis.yml b/.travis.yml
index d9b4cb5eade..67da27d00be 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,5 +1,9 @@
language: python
+cache:
+ directories:
+ - $HOME/.cache/pip
+
services:
- rabbitmq
- mariadb
@@ -19,23 +23,30 @@ env:
global:
- GOPATH=/tmp/go
- PATH=$GOPATH/bin:$PATH
- matrix:
- - TOXENV=py26 BOULDER_INTEGRATION=1
- - TOXENV=py27 BOULDER_INTEGRATION=1
- - TOXENV=py26-oldest BOULDER_INTEGRATION=1
- - TOXENV=py27-oldest BOULDER_INTEGRATION=1
- - TOXENV=py33
- - TOXENV=py34
- - TOXENV=lint
- - TOXENV=cover
-# Disabled for now due to requiring sudo -> causing more boulder integration
-# DNS timeouts :(
-# - TOXENV=apacheconftest
matrix:
include:
- - env: TOXENV=py35
- python: 3.5
-
+ - python: "2.6"
+ env: TOXENV=py26 BOULDER_INTEGRATION=1
+ - python: "2.6"
+ env: TOXENV=py26-oldest BOULDER_INTEGRATION=1
+# Disabled for now due to requiring sudo -> causing more boulder integration
+# DNS timeouts :(
+# - python: "2.7"
+# env: TOXENV=apacheconftest
+ - python: "2.7"
+ env: TOXENV=py27 BOULDER_INTEGRATION=1
+ - python: "2.7"
+ env: TOXENV=py27-oldest BOULDER_INTEGRATION=1
+ - python: "2.7"
+ env: TOXENV=cover
+ - python: "2.7"
+ env: TOXENV=lint
+ - python: "3.3"
+ env: TOXENV=py33
+ - python: "3.4"
+ env: TOXENV=py34
+ - python: "3.5"
+ env: TOXENV=py35
# Only build pushes to the master branch, PRs, and branches beginning with
# `test-`. This reduces the number of simultaneous Travis runs, which speeds
@@ -57,7 +68,6 @@ addons:
sources:
- augeas
packages: # keep in sync with bootstrap/ubuntu.sh and Boulder
- - python
- python-dev
- python-virtualenv
- gcc
Builds off #2160 (which builds off #2030). My theory for why this didn't work before is that Travis didn't know which Python version our tests were running under, causing it to use the wrong `pip` cache.
Building off of #2160 should fix this. I will run Travis tests multiple times and verify the cache is used in each test to prevent the problems that occurred before.
| https://api.github.com/repos/certbot/certbot/pulls/2238 | 2016-01-19T23:52:11Z | 2016-01-20T01:43:17Z | 2016-01-20T01:43:17Z | 2016-05-06T19:22:31Z | 667 | certbot/certbot | 2,591 |
Reduce update_interval for Opengarage | diff --git a/homeassistant/components/opengarage/__init__.py b/homeassistant/components/opengarage/__init__.py
index 76ffcc42bd16..eb1b50db5b69 100644
--- a/homeassistant/components/opengarage/__init__.py
+++ b/homeassistant/components/opengarage/__init__.py
@@ -66,7 +66,7 @@ def __init__(
hass,
_LOGGER,
name=DOMAIN,
- update_interval=timedelta(seconds=30),
+ update_interval=timedelta(seconds=5),
)
async def _async_update_data(self) -> None:
| <!--
You are amazing! Thanks for contributing to our project!
Please, DO NOT DELETE ANY TEXT from this template! (unless instructed).
-->
## Proposed change
Reduce update_interval for Opengarage
## Type of change
- [ ] Dependency upgrade
- [x] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
## Additional information
- This PR fixes or closes issue: fixes #64096
- This PR is related to issue:
- Link to documentation pull request:
## Checklist
- [x] The code change is tested and works locally.
- [x] Local tests pass. **Your PR cannot be merged unless tests pass**
- [x] There is no commented out code in this PR.
- [x] I have followed the [development checklist][dev-checklist]
- [x] The code has been formatted using Black (`black --fast homeassistant tests`)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for [www.home-assistant.io][docs-repository]
If the code communicates with devices, web services, or third-party tools:
- [ ] The [manifest file][manifest-docs] has all fields filled out correctly.
Updated and included derived files by running: `python3 -m script.hassfest`.
- [ ] New or updated dependencies have been added to `requirements_all.txt`.
Updated by running `python3 -m script.gen_requirements_all`.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
- [ ] Untested files have been added to `.coveragerc`.
The integration reached or maintains the following [Integration Quality Scale][quality-scale]:
- [ ] No score or internal
- [ ] 🥈 Silver
- [ ] 🥇 Gold
- [ ] 🏆 Platinum
To help with the load of incoming pull requests:
- [ ] I have reviewed two other [open pull requests][prs] in this repository.
[prs]: https://github.com/home-assistant/core/pulls?q=is%3Aopen+is%3Apr+-author%3A%40me+-draft%3Atrue+-label%3Awaiting-for-upstream+sort%3Acreated-desc+review%3Anone+-status%3Afailure
[dev-checklist]: https://developers.home-assistant.io/docs/en/development_checklist.html
[manifest-docs]: https://developers.home-assistant.io/docs/en/creating_integration_manifest.html
[quality-scale]: https://developers.home-assistant.io/docs/en/next/integration_quality_scale_index.html
[docs-repository]: https://github.com/home-assistant/home-assistant.io
| https://api.github.com/repos/home-assistant/core/pulls/66478 | 2022-02-13T20:36:50Z | 2022-02-13T21:41:47Z | 2022-02-13T21:41:46Z | 2022-02-14T22:02:11Z | 141 | home-assistant/core | 39,095 |
Added alternative implementation for Iterator pattern using the Iterator protocol | diff --git a/README.md b/README.md
index 77526b42..49ad4d4a 100644
--- a/README.md
+++ b/README.md
@@ -42,6 +42,7 @@ __Behavioral Patterns__:
| [chaining_method](patterns/behavioral/chaining_method.py) | continue callback next object method |
| [command](patterns/behavioral/command.py) | bundle a command and arguments to call later |
| [iterator](patterns/behavioral/iterator.py) | traverse a container and access the container's elements |
+| [iterator](patterns/behavioral/iterator_alt.py) (alt. impl.)| traverse a container and access the container's elements |
| [mediator](patterns/behavioral/mediator.py) | an object that knows how to connect other objects and act as a proxy |
| [memento](patterns/behavioral/memento.py) | generate an opaque token that can be used to go back to a previous state |
| [observer](patterns/behavioral/observer.py) | provide a callback for notification of events/changes to data |
diff --git a/patterns/behavioral/iterator_alt.py b/patterns/behavioral/iterator_alt.py
new file mode 100644
index 00000000..afc23a03
--- /dev/null
+++ b/patterns/behavioral/iterator_alt.py
@@ -0,0 +1,58 @@
+"""
+Implementation of the iterator pattern using the iterator protocol from Python
+
+*TL;DR
+Traverses a container and accesses the container's elements.
+"""
+
+
+class NumberWords:
+ """Counts by word numbers, up to a maximum of five"""
+ _WORD_MAP = (
+ 'one',
+ 'two',
+ 'three',
+ 'four',
+ 'five',
+ )
+
+ def __init__(self, start, stop):
+ self.start = start
+ self.stop = stop
+
+ def __iter__(self): # this makes the class an Iterable
+ return self
+
+ def __next__(self): # this makes the class an Iterator
+ if self.start > self.stop or self.start > len(self._WORD_MAP):
+ raise StopIteration
+ current = self.start
+ self.start += 1
+ return self._WORD_MAP[current - 1]
+
+
+# Test the iterator
+
+def main():
+ """
+ # Counting to two...
+ >>> for number in NumberWords(start=1, stop=2):
+ ... print(number)
+ one
+ two
+
+ # Counting to five...
+ >>> for number in NumberWords(start=1, stop=5):
+ ... print(number)
+ one
+ two
+ three
+ four
+ five
+ """
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
| In this PR I add the alternative version on how to implement the Iterator pattern using the Iterator protocol from Python. I followed the same example in the original version so that they can be easily compared. | https://api.github.com/repos/faif/python-patterns/pulls/320 | 2020-02-11T22:41:13Z | 2020-02-12T21:05:56Z | 2020-02-12T21:05:56Z | 2020-02-12T21:05:56Z | 658 | faif/python-patterns | 33,673 |
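For comparison with the class-based `NumberWords` iterator this PR adds, the same traversal can be written as a generator function, which provides `__iter__` and `__next__` implicitly. The function name below is my own; it is a sketch of an equivalent, not part of the PR.

```python
_WORD_MAP = ("one", "two", "three", "four", "five")


def number_words(start: int, stop: int):
    """Generator equivalent of the NumberWords iterator above.

    Yields word-numbers from `start` to `stop` inclusive (1-based),
    capped at five, matching the class's StopIteration conditions.
    """
    for current in range(start, min(stop, len(_WORD_MAP)) + 1):
        yield _WORD_MAP[current - 1]
```

Unlike the class (which is its own iterator and is exhausted after one pass), each call here produces a fresh generator, so the same arguments can be iterated more than once.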
Document that we're not accepting DNS plugins | diff --git a/certbot/docs/contributing.rst b/certbot/docs/contributing.rst
index da0ddc9d16b..0807cbf2444 100644
--- a/certbot/docs/contributing.rst
+++ b/certbot/docs/contributing.rst
@@ -300,6 +300,16 @@ configuration checkpoints and rollback.
Writing your own plugin
~~~~~~~~~~~~~~~~~~~~~~~
+.. note:: The Certbot team is not currently accepting any new DNS plugins
+ because we want to rethink our approach to the challenge and resolve some
+ issues like `#6464 <https://github.com/certbot/certbot/issues/6464>`_,
+ `#6503 <https://github.com/certbot/certbot/issues/6503>`_, and `#6504
+ <https://github.com/certbot/certbot/issues/6504>`_ first.
+
+ In the meantime, you're welcome to release it as a third-party plugin. See
+ `certbot-dns-ispconfig <https://github.com/m42e/certbot-dns-ispconfig>`_
+ for one example of that.
+
Certbot client supports dynamic discovery of plugins through the
`setuptools entry points`_ using the `certbot.plugins` group. This
way you can, for example, create a custom implementation of
| Docs now look like:
![Screen Shot 2019-12-11 at 11 55 26 AM](https://user-images.githubusercontent.com/6504915/70655459-263e6f80-1c0d-11ea-87f2-cb8b12010589.png)
| https://api.github.com/repos/certbot/certbot/pulls/7639 | 2019-12-11T19:55:55Z | 2019-12-18T19:13:57Z | 2019-12-18T19:13:57Z | 2019-12-18T19:14:06Z | 303 | certbot/certbot | 3,079 |
bpo-37610: improve Using Python doc wrt Editors & IDE | diff --git a/Doc/using/editors.rst b/Doc/using/editors.rst
new file mode 100644
index 00000000000000..f36f570125c119
--- /dev/null
+++ b/Doc/using/editors.rst
@@ -0,0 +1,14 @@
+.. highlight:: none
+
+.. _editors:
+
+******************
+ Editors and IDEs
+******************
+
+There are a number of IDEs that support Python programming language.
+Many editors and IDEs provide syntax highlighting, debugging tools, and :pep:`8` checks.
+
+Please go to `Python Editors <https://wiki.python.org/moin/PythonEditors>`_ and
+`Integrated Development Environments <https://wiki.python.org/moin/IntegratedDevelopmentEnvironments>`_
+for a comprehensive list.
diff --git a/Doc/using/index.rst b/Doc/using/index.rst
index 4f0aa7d9577df6..4a45121ac2eebd 100644
--- a/Doc/using/index.rst
+++ b/Doc/using/index.rst
@@ -17,3 +17,4 @@ interpreter and things that make working with Python easier.
unix.rst
windows.rst
mac.rst
+ editors.rst
diff --git a/Doc/using/unix.rst b/Doc/using/unix.rst
index 021f0d35a8eea3..c0a5643fc2c242 100644
--- a/Doc/using/unix.rst
+++ b/Doc/using/unix.rst
@@ -134,14 +134,3 @@ some Unices may not have the :program:`env` command, so you may need to hardcode
``/usr/bin/python3`` as the interpreter path.
To use shell commands in your Python scripts, look at the :mod:`subprocess` module.
-
-
-Editors and IDEs
-================
-
-There are a number of IDEs that support Python programming language.
-Many editors and IDEs provide syntax highlighting, debugging tools, and :pep:`8` checks.
-
-Please go to `Python Editors <https://wiki.python.org/moin/PythonEditors>`_ and
-`Integrated Development Environments <https://wiki.python.org/moin/IntegratedDevelopmentEnvironments>`_
-for a comprehensive list.
| Move the Editors and IDE section out of the Unix section, to its own section.
<!-- issue-number: [bpo-37610](https://bugs.python.org/issue37610) -->
https://bugs.python.org/issue37610
<!-- /issue-number -->
Automerge-Triggered-By: @Mariatta | https://api.github.com/repos/python/cpython/pulls/14850 | 2019-07-19T01:12:25Z | 2019-07-19T01:23:17Z | 2019-07-19T01:23:17Z | 2019-07-19T01:25:07Z | 522 | python/cpython | 4,088 |
Misspelled word | diff --git a/tools/infer/predict_system.py b/tools/infer/predict_system.py
index 8d674809a5..16789b81cd 100755
--- a/tools/infer/predict_system.py
+++ b/tools/infer/predict_system.py
@@ -92,11 +92,11 @@ def __call__(self, img, cls=True):
self.draw_crop_rec_res(self.args.crop_res_save_dir, img_crop_list,
rec_res)
filter_boxes, filter_rec_res = [], []
- for box, rec_reuslt in zip(dt_boxes, rec_res):
- text, score = rec_reuslt
+ for box, rec_result in zip(dt_boxes, rec_res):
+ text, score = rec_result
if score >= self.drop_score:
filter_boxes.append(box)
- filter_rec_res.append(rec_reuslt)
+ filter_rec_res.append(rec_result)
return filter_boxes, filter_rec_res
diff --git a/tools/infer_cls.py b/tools/infer_cls.py
index 7522e43907..ab6a49120b 100755
--- a/tools/infer_cls.py
+++ b/tools/infer_cls.py
@@ -73,8 +73,8 @@ def main():
images = paddle.to_tensor(images)
preds = model(images)
post_result = post_process_class(preds)
- for rec_reuslt in post_result:
- logger.info('\t result: {}'.format(rec_reuslt))
+ for rec_result in post_result:
+ logger.info('\t result: {}'.format(rec_result))
logger.info("success!")
`result` was written as `reuslt` | https://api.github.com/repos/PaddlePaddle/PaddleOCR/pulls/5345 | 2022-01-25T06:40:04Z | 2022-01-25T08:35:35Z | 2022-01-25T08:35:35Z | 2022-01-25T08:35:35Z | 349 | PaddlePaddle/PaddleOCR | 42,338
🌐 Add Turkish translation for `docs/tr/docs/tutorial/query-params.md` | diff --git a/docs/tr/docs/tutorial/query-params.md b/docs/tr/docs/tutorial/query-params.md
new file mode 100644
index 0000000000000..61232d5b38008
--- /dev/null
+++ b/docs/tr/docs/tutorial/query-params.md
@@ -0,0 +1,227 @@
+# Sorgu Parametreleri
+
+Fonksiyonda yol parametrelerinin parçası olmayan diğer tanımlamalar otomatik olarak "sorgu" parametresi olarak yorumlanır.
+
+```Python hl_lines="9"
+{!../../../docs_src/query_params/tutorial001.py!}
+```
+
+Sorgu, bağlantıdaki `?` kısmından sonra gelen ve `&` işareti ile ayrılan anahtar-değer çiftlerinin oluşturduğu bir kümedir.
+
+Örneğin, aşağıdaki bağlantıda:
+
+```
+http://127.0.0.1:8000/items/?skip=0&limit=10
+```
+
+...sorgu parametreleri şunlardır:
+
+* `skip`: değeri `0`'dır
+* `limit`: değeri `10`'dır
+
+Parametreler bağlantının bir parçası oldukları için doğal olarak string olarak değerlendirilirler.
+
+Fakat, Python tipleri ile tanımlandıkları zaman (yukarıdaki örnekte `int` oldukları gibi), parametreler o tiplere dönüştürülür ve o tipler çerçevesinde doğrulanırlar.
+
+Yol parametreleri için geçerli olan her türlü işlem aynı şekilde sorgu parametreleri için de geçerlidir:
+
+* Editör desteği (şüphesiz)
+* Veri "<abbr title="HTTP isteği ile birlikte gelen string'i Python verisine dönüştürme">ayrıştırma</abbr>"
+* Veri doğrulama
+* Otomatik dokümantasyon
+
+## Varsayılanlar
+
+Sorgu parametreleri, adres yolunun sabit bir parçası olmadıklarından dolayı isteğe bağlı ve varsayılan değere sahip olabilirler.
+
+Yukarıdaki örnekte `skip=0` ve `limit=10` varsayılan değere sahiplerdir.
+
+Yani, aşağıdaki bağlantıya gitmek:
+
+```
+http://127.0.0.1:8000/items/
+```
+
+şu adrese gitmek ile aynı etkiye sahiptir:
+
+```
+http://127.0.0.1:8000/items/?skip=0&limit=10
+```
+
+Ancak, mesela şöyle bir adresi ziyaret ederseniz:
+
+```
+http://127.0.0.1:8000/items/?skip=20
+```
+
+Fonksiyonunuzdaki parametre değerleri aşağıdaki gibi olacaktır:
+
+* `skip=20`: çünkü bağlantıda böyle tanımlandı.
+* `limit=10`: çünkü varsayılan değer buydu.
+
+## İsteğe Bağlı Parametreler
+
+Aynı şekilde, varsayılan değerlerini `None` olarak atayarak isteğe bağlı parametreler tanımlayabilirsiniz:
+
+=== "Python 3.10+"
+
+ ```Python hl_lines="7"
+ {!> ../../../docs_src/query_params/tutorial002_py310.py!}
+ ```
+
+=== "Python 3.8+"
+
+ ```Python hl_lines="9"
+ {!> ../../../docs_src/query_params/tutorial002.py!}
+ ```
+
+Bu durumda, `q` fonksiyon parametresi isteğe bağlı olacak ve varsayılan değer olarak `None` alacaktır.
+
+!!! check "Ek bilgi"
+ Ayrıca, dikkatinizi çekerim ki; **FastAPI**, `item_id` parametresinin bir yol parametresi olduğunu ve `q` parametresinin yol değil bir sorgu parametresi olduğunu fark edecek kadar beceriklidir.
+
+## Sorgu Parametresi Tip Dönüşümü
+
+Aşağıda görüldüğü gibi dönüştürülmek üzere `bool` tipleri de tanımlayabilirsiniz:
+
+=== "Python 3.10+"
+
+ ```Python hl_lines="7"
+ {!> ../../../docs_src/query_params/tutorial003_py310.py!}
+ ```
+
+=== "Python 3.8+"
+
+ ```Python hl_lines="9"
+ {!> ../../../docs_src/query_params/tutorial003.py!}
+ ```
+
+Bu durumda, eğer şu adrese giderseniz:
+
+```
+http://127.0.0.1:8000/items/foo?short=1
+```
+
+veya
+
+```
+http://127.0.0.1:8000/items/foo?short=True
+```
+
+veya
+
+```
+http://127.0.0.1:8000/items/foo?short=true
+```
+
+veya
+
+```
+http://127.0.0.1:8000/items/foo?short=on
+```
+
+veya
+
+```
+http://127.0.0.1:8000/items/foo?short=yes
+```
+
+veya adres, herhangi farklı bir harf varyasyonu içermesi durumuna rağmen (büyük harf, sadece baş harfi büyük kelime, vb.) fonksiyonunuz, `bool` tipli `short` parametresini `True` olarak algılayacaktır. Aksi halde `False` olarak algılanacaktır.
+
+
+## Çoklu Yol ve Sorgu Parametreleri
+
+**FastAPI** neyin ne olduğunu ayırt edebileceğinden dolayı aynı anda birden fazla yol ve sorgu parametresi tanımlayabilirsiniz.
+
+Ve parametreleri, herhangi bir sıraya koymanıza da gerek yoktur.
+
+İsimlerine göre belirleneceklerdir:
+
+=== "Python 3.10+"
+
+ ```Python hl_lines="6 8"
+ {!> ../../../docs_src/query_params/tutorial004_py310.py!}
+ ```
+
+=== "Python 3.8+"
+
+ ```Python hl_lines="8 10"
+ {!> ../../../docs_src/query_params/tutorial004.py!}
+ ```
+
+## Zorunlu Sorgu Parametreleri
+
+Türü yol olmayan bir parametre (şu ana kadar sadece sorgu parametrelerini gördük) için varsayılan değer tanımlarsanız o parametre zorunlu olmayacaktır.
+
+Parametre için belirli bir değer atamak istemeyip parametrenin sadece isteğe bağlı olmasını istiyorsanız değerini `None` olarak atayabilirsiniz.
+
+Fakat, bir sorgu parametresini zorunlu yapmak istiyorsanız varsayılan bir değer atamamanız yeterli olacaktır:
+
+```Python hl_lines="6-7"
+{!../../../docs_src/query_params/tutorial005.py!}
+```
+
+Burada `needy` parametresi `str` tipinden oluşan zorunlu bir sorgu parametresidir.
+
+Eğer tarayıcınızda şu bağlantıyı:
+
+```
+http://127.0.0.1:8000/items/foo-item
+```
+
+...`needy` parametresini eklemeden açarsanız şuna benzer bir hata ile karşılaşırsınız:
+
+```JSON
+{
+ "detail": [
+ {
+ "type": "missing",
+ "loc": [
+ "query",
+ "needy"
+ ],
+ "msg": "Field required",
+ "input": null,
+ "url": "https://errors.pydantic.dev/2.1/v/missing"
+ }
+ ]
+}
+```
+
+`needy` zorunlu bir parametre olduğundan dolayı bağlantıda tanımlanması gerekir:
+
+```
+http://127.0.0.1:8000/items/foo-item?needy=sooooneedy
+```
+
+...bu iş görür:
+
+```JSON
+{
+ "item_id": "foo-item",
+ "needy": "sooooneedy"
+}
+```
+
+Ve elbette, bazı parametreleri zorunlu, bazılarını varsayılan değerli ve bazılarını tamamen opsiyonel olarak tanımlayabilirsiniz:
+
+=== "Python 3.10+"
+
+ ```Python hl_lines="8"
+ {!> ../../../docs_src/query_params/tutorial006_py310.py!}
+ ```
+
+=== "Python 3.8+"
+
+ ```Python hl_lines="10"
+ {!> ../../../docs_src/query_params/tutorial006.py!}
+ ```
+
+Bu durumda, 3 tane sorgu parametresi var olacaktır:
+
+* `needy`, zorunlu bir `str`.
+* `skip`, varsayılan değeri `0` olan bir `int`.
+* `limit`, isteğe bağlı bir `int`.
+
+!!! tip "İpucu"
+ Ayrıca, [Yol Parametreleri](path-params.md#predefined-values){.internal-link target=_blank}nde de kullanıldığı şekilde `Enum` sınıfından faydalanabilirsiniz.
| 🌐 Add Turkish translation for `docs/tr/docs/tutorial/query-params.md`
[Original File](https://github.com/tiangolo/fastapi/blob/master/docs/en/docs/tutorial/query-params.md)
Discussion: https://github.com/tiangolo/fastapi/discussions/9193 | https://api.github.com/repos/tiangolo/fastapi/pulls/11078 | 2024-02-02T06:42:02Z | 2024-02-18T12:20:41Z | 2024-02-18T12:20:41Z | 2024-02-18T12:20:50Z | 2,071 | tiangolo/fastapi | 22,701 |
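The bool conversion described in the translated tutorial above (`1`, `True`, `true`, `on`, `yes`, in any capitalization, all coerce to `True`) can be sketched without the framework. This is an illustration of the documented behavior, not FastAPI's actual implementation:

```python
TRUTHY = {"1", "true", "on", "yes"}

def parse_bool_query(raw):
    """Mimic the documented query-string -> bool coercion."""
    return raw.strip().lower() in TRUTHY

for value in ("1", "True", "true", "on", "yes", "off"):
    print(value, "->", parse_bool_query(value))
```

Any value outside the truthy set (such as `off` above) falls through to `False`, matching the tutorial's "Aksi halde `False` olarak algılanacaktır."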
Fix Erlang heading in README.md | diff --git a/README.md b/README.md
index cfe9e6ad..b6e6c68f 100644
--- a/README.md
+++ b/README.md
@@ -170,6 +170,7 @@ For a list of free machine learning books available for download, go [here](http
* [Incanter](http://incanter.org/) - Incanter is a Clojure-based, R-like platform for statistical computing and graphics.
* [PigPen](https://github.com/Netflix/PigPen) - Map-Reduce for Clojure.
* [Envision] (https://github.com/clojurewerkz/envision) - Clojure Data Visualisation library, based on Statistiker and D3
+
<a name="erlang" />
## Erlang
| https://api.github.com/repos/josephmisiti/awesome-machine-learning/pulls/109 | 2015-01-15T09:08:52Z | 2015-01-15T15:34:24Z | 2015-01-15T15:34:24Z | 2015-01-15T15:34:24Z | 175 | josephmisiti/awesome-machine-learning | 51,895 |
|
Fix certbot-apache tests on Python 3 | diff --git a/certbot-apache/certbot_apache/obj.py b/certbot-apache/certbot_apache/obj.py
index b29b0e0ee9d..30cb24844f4 100644
--- a/certbot-apache/certbot_apache/obj.py
+++ b/certbot-apache/certbot_apache/obj.py
@@ -25,6 +25,11 @@ def __ne__(self, other):
def __repr__(self):
return "certbot_apache.obj.Addr(" + repr(self.tup) + ")"
+ def __hash__(self):
+ # Python 3 requires explicit overridden for __hash__ if __eq__ or
+ # __cmp__ is overridden. See https://bugs.python.org/issue2235
+ return super(Addr, self).__hash__()
+
def _addr_less_specific(self, addr):
"""Returns if addr.get_addr() is more specific than self.get_addr()."""
# pylint: disable=protected-access
@@ -174,6 +179,11 @@ def __eq__(self, other):
def __ne__(self, other):
return not self.__eq__(other)
+ def __hash__(self):
+ return hash((self.filep, self.path,
+ tuple(self.addrs), tuple(self.get_names()),
+ self.ssl, self.enabled, self.modmacro))
+
def conflicts(self, addrs):
"""See if vhost conflicts with any of the addrs.
diff --git a/certbot-apache/certbot_apache/parser.py b/certbot-apache/certbot_apache/parser.py
index 6bb6ff170ea..275a01e7fe4 100644
--- a/certbot-apache/certbot_apache/parser.py
+++ b/certbot-apache/certbot_apache/parser.py
@@ -1,10 +1,12 @@
"""ApacheParser is a member object of the ApacheConfigurator class."""
import fnmatch
-import itertools
import logging
import os
import re
import subprocess
+import sys
+
+import six
from certbot import errors
@@ -87,7 +89,7 @@ def init_modules(self):
while len(self.modules) != prev_size:
prev_size = len(self.modules)
- for match_name, match_filename in itertools.izip(
+ for match_name, match_filename in six.moves.zip(
iterator, iterator):
self.modules.add(self.get_arg(match_name))
self.modules.add(
@@ -460,8 +462,12 @@ def fnmatch_to_re(self, clean_fn_match): # pylint: disable=no-self-use
:rtype: str
"""
- # This strips off final /Z(?ms)
- return fnmatch.translate(clean_fn_match)[:-7]
+ if sys.version_info < (3, 6):
+ # This strips off final /Z(?ms)
+ return fnmatch.translate(clean_fn_match)[:-7]
+ else: # pragma: no cover
+ # Since Python 3.6, it returns a different pattern like (?s:.*\.load)\Z
+ return fnmatch.translate(clean_fn_match)[4:-3]
def _parse_file(self, filepath):
"""Parse file with Augeas
diff --git a/certbot-apache/certbot_apache/tests/configurator_test.py b/certbot-apache/certbot_apache/tests/configurator_test.py
index 01361c8f0ef..9376942678f 100644
--- a/certbot-apache/certbot_apache/tests/configurator_test.py
+++ b/certbot-apache/certbot_apache/tests/configurator_test.py
@@ -6,6 +6,8 @@
import unittest
import mock
+# six is used in mock.patch()
+import six # pylint: disable=unused-import
from acme import challenges
@@ -517,12 +519,12 @@ def test_prepare_server_https_named_listen(self):
# Test
self.config.prepare_server_https("8080", temp=True)
self.assertEqual(mock_add_dir.call_count, 3)
- self.assertEqual(mock_add_dir.call_args_list[0][0][2],
- ["1.2.3.4:8080", "https"])
- self.assertEqual(mock_add_dir.call_args_list[1][0][2],
- ["[::1]:8080", "https"])
- self.assertEqual(mock_add_dir.call_args_list[2][0][2],
- ["1.1.1.1:8080", "https"])
+ call_args_list = [mock_add_dir.call_args_list[i][0][2] for i in range(3)]
+ self.assertEqual(
+ sorted(call_args_list),
+ sorted([["1.2.3.4:8080", "https"],
+ ["[::1]:8080", "https"],
+ ["1.1.1.1:8080", "https"]]))
# mock_get.side_effect = ["1.2.3.4:80", "[::1]:80"]
# mock_find.return_value = ["test1", "test2", "test3"]
@@ -662,7 +664,7 @@ def test_make_vhost_ssl_bad_write(self):
# This calls open
self.config.reverter.register_file_creation = mock.Mock()
mock_open.side_effect = IOError
- with mock.patch("__builtin__.open", mock_open):
+ with mock.patch("six.moves.builtins.open", mock_open):
self.assertRaises(
errors.PluginError,
self.config.make_vhost_ssl, self.vh_truth[0])
@@ -1208,13 +1210,13 @@ def get_achalls(self):
achall1 = achallenges.KeyAuthorizationAnnotatedChallenge(
challb=acme_util.chall_to_challb(
challenges.TLSSNI01(
- token="jIq_Xy1mXGN37tb4L6Xj_es58fW571ZNyXekdZzhh7Q"),
+ token=b"jIq_Xy1mXGN37tb4L6Xj_es58fW571ZNyXekdZzhh7Q"),
"pending"),
domain="encryption-example.demo", account_key=account_key)
achall2 = achallenges.KeyAuthorizationAnnotatedChallenge(
challb=acme_util.chall_to_challb(
challenges.TLSSNI01(
- token="uqnaPzxtrndteOqtrXb0Asl5gOJfWAnnx6QJyvcmlDU"),
+ token=b"uqnaPzxtrndteOqtrXb0Asl5gOJfWAnnx6QJyvcmlDU"),
"pending"),
domain="certbot.demo", account_key=account_key)
diff --git a/certbot-apache/certbot_apache/tests/display_ops_test.py b/certbot-apache/certbot_apache/tests/display_ops_test.py
index ec6eee3f2b1..f8b75022e14 100644
--- a/certbot-apache/certbot_apache/tests/display_ops_test.py
+++ b/certbot-apache/certbot_apache/tests/display_ops_test.py
@@ -38,7 +38,7 @@ def test_noninteractive(self, mock_util):
try:
self._call(self.vhosts)
except errors.MissingCommandlineFlag as e:
- self.assertTrue("vhost ambiguity" in e.message)
+ self.assertTrue("vhost ambiguity" in str(e))
@certbot_util.patch_get_utility()
def test_more_info_cancel(self, mock_util):
diff --git a/certbot-apache/certbot_apache/tests/tls_sni_01_test.py b/certbot-apache/certbot_apache/tests/tls_sni_01_test.py
index 5e369e3dbf4..62464d5d085 100644
--- a/certbot-apache/certbot_apache/tests/tls_sni_01_test.py
+++ b/certbot-apache/certbot_apache/tests/tls_sni_01_test.py
@@ -105,7 +105,7 @@ def test_mod_config(self):
for achall in self.achalls:
self.sni.add_chall(achall)
z_domain = achall.response(self.auth_key).z_domain
- z_domains.append(set([z_domain]))
+ z_domains.append(set([z_domain.decode('ascii')]))
self.sni._mod_config() # pylint: disable=protected-access
self.sni.configurator.save()
diff --git a/certbot-apache/certbot_apache/tls_sni_01.py b/certbot-apache/certbot_apache/tls_sni_01.py
index d9e29411994..65a66d2fd1d 100644
--- a/certbot-apache/certbot_apache/tls_sni_01.py
+++ b/certbot-apache/certbot_apache/tls_sni_01.py
@@ -184,7 +184,7 @@ def _get_config_text(self, achall, ip_addrs):
# https://docs.python.org/2.7/reference/lexical_analysis.html
return self.VHOST_TEMPLATE.format(
vhost=ips,
- server_name=achall.response(achall.account_key).z_domain,
+ server_name=achall.response(achall.account_key).z_domain.decode('ascii'),
ssl_options_conf_path=self.configurator.mod_ssl_conf,
cert_path=self.get_cert_path(achall),
key_path=self.get_key_path(achall),
diff --git a/tox.ini b/tox.ini
index e6317e6651e..232010d40f3 100644
--- a/tox.ini
+++ b/tox.ini
@@ -46,6 +46,8 @@ commands =
nosetests -v acme --processes=-1
pip install -e .[dev]
nosetests -v certbot --processes=-1 --process-timeout=100
+ pip install -e certbot-apache
+ nosetests -v certbot_apache --processes=-1 --process-timeout=80
[testenv:py34]
commands =
@@ -53,6 +55,8 @@ commands =
nosetests -v acme --processes=-1
pip install -e .[dev]
nosetests -v certbot --processes=-1 --process-timeout=100
+ pip install -e certbot-apache
+ nosetests -v certbot_apache --processes=-1 --process-timeout=80
[testenv:py35]
commands =
@@ -60,6 +64,8 @@ commands =
nosetests -v acme --processes=-1
pip install -e .[dev]
nosetests -v certbot --processes=-1 --process-timeout=100
+ pip install -e certbot-apache
+ nosetests -v certbot_apache --processes=-1 --process-timeout=80
[testenv:py36]
commands =
@@ -67,6 +73,8 @@ commands =
nosetests -v acme --processes=-1
pip install -e .[dev]
nosetests -v certbot --processes=-1 --process-timeout=100
+ pip install -e certbot-apache
+ nosetests -v certbot_apache --processes=-1 --process-timeout=80
[testenv:py27_install]
basepython = python2.7
| Continuation of #3375 (Enable unit tests of certbot core on Python 3); another step toward #3179
See https://travis-ci.org/yan12125/certbot/builds/198516059 for my local build. Note that ```le_auto_trusty``` fails due to #4166
By the way, although all tests pass, there are quite a few warnings. It seems the test suite is not complete enough to cover the differences between Python 2/3.
```
$ for f in certbot-apache/certbot_apache/tests/*.py ; do ; echo $f ; python $f ; done
certbot-apache/certbot_apache/tests/augeas_configurator_test.py
.Unable to save files: . Attempted Save Notes:
........Certbot hasn't modified your configuration, so rollback isn't available.
....
----------------------------------------------------------------------
Ran 13 tests in 1.501s
OK
certbot-apache/certbot_apache/tests/complex_parsing_test.py
................
----------------------------------------------------------------------
Ran 16 tests in 0.447s
OK
certbot-apache/certbot_apache/tests/configurator_test.py
....Encountered a problem while parsing file: debian_apache_2_4/augeas_vhosts/apache2/sites-available/old,default.conf, skipping
.Encountered a problem while parsing file: debian_apache_2_4/augeas_vhosts/apache2/sites-available/old,default.conf, skipping
........No vhost exists with servername or alias of: none.com (or it's in a file with multiple vhosts, which Certbot can't parse yet). No vhost was selected. Please specify ServerName or ServerAlias in the Apache config, or split vhosts into separate files.
.The selected vhost would conflict with other HTTPS VirtualHosts within Apache. Please select another vhost or add ServerNames to your configuration.
.............Cannot find a cert or key directive in /files/tmp/tmpky8x2mxntemp/debian_apache_2_4/multiple_vhosts/apache2/sites-available/default-ssl.conf/IfModule/VirtualHost. VirtualHost was not modified
..........Failed redirect for satoshi.com
.......................Error writing/reading to file in make_vhost_ssl
.Error: should only be one vhost in /tmp/tmpd6jhkcuatemp/debian_apache_2_4/multiple_vhosts/apache2/sites-available/encryption-example.conf
.certbot-apache/certbot_apache/tests/configurator_test.py:1185: ResourceWarning: unclosed file <_io.TextIOWrapper name='/tmp/tmp3mwr1t52temp/debian_apache_2_4/multiple_vhosts/apache2/sites-available/encryption-example-le-ssl.conf' mode='r' encoding='UTF-8'>
conf_line_set = set(open(ssl_vhost.filep).read().splitlines())
.certbot-apache/certbot_apache/tests/configurator_test.py:1146: ResourceWarning: unclosed file <_io.TextIOWrapper name='/tmp/tmpsyfxgf6atemp/debian_apache_2_4/multiple_vhosts/apache2/sites-available/encryption-example-le-ssl.conf' mode='r' encoding='UTF-8'>
conf_text = open(ssl_vhost.filep).read()
.....Failed staple-ocsp for certbot.demo
..............Added an HTTP->HTTPS rewrite in addition to other RewriteRules; you may wish to check for overall consistency.
........
----------------------------------------------------------------------
Ran 90 tests in 15.000s
OK
certbot-apache/certbot_apache/tests/constants_test.py
....
----------------------------------------------------------------------
Ran 4 tests in 0.001s
OK
certbot-apache/certbot_apache/tests/display_ops_test.py
...Encountered vhost ambiguity but unable to ask for user guidance in non-interactive mode. Currently Certbot needs each vhost to be in its own conf file, and may need vhosts to be explicitly labelled with ServerName or ServerAlias directories.
...
----------------------------------------------------------------------
Ran 6 tests in 0.020s
OK
certbot-apache/certbot_apache/tests/__init__.py
certbot-apache/certbot_apache/tests/obj_test.py
/home/yen/Projects/tmp/certbot/.tox/py36/lib/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /home/yen/Projects/tmp/certbot/.tox/py36/lib/python3.6/site-packages/zope: missing __init__
_warnings.warn(msg.format(portions[0]), ImportWarning)
/home/yen/Projects/tmp/certbot/.tox/py36/lib/python3.6/importlib/_bootstrap_external.py:426: ImportWarning: Not importing directory /home/yen/Projects/tmp/certbot/.tox/py36/lib/python3.6/site-packages/logilab: missing __init__
_warnings.warn(msg.format(portions[0]), ImportWarning)
/home/yen/Projects/tmp/certbot/.tox/py36/lib/python3.6/distutils/__init__.py:4: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
..........
----------------------------------------------------------------------
Ran 10 tests in 0.215s
OK
certbot-apache/certbot_apache/tests/parser_test.py
.........Error running command nonexistent for runtime parameters!
.Error in checking parameter list:
.Unexpected number of equal signs in runtime config dump.
.....
----------------------------------------------------------------------
Ran 16 tests in 0.735s
OK
certbot-apache/certbot_apache/tests/tls_sni_01_test.py
.Falling back to default vhost *:443...
.....
----------------------------------------------------------------------
Ran 6 tests in 0.875s
OK
```
Is it OK to leave them until future bug reports arrive? | https://api.github.com/repos/certbot/certbot/pulls/4172 | 2017-02-05T09:54:03Z | 2017-02-25T02:21:22Z | 2017-02-25T02:21:22Z | 2017-02-25T08:24:52Z | 2,559 | certbot/certbot | 3,353 |
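The `__hash__` additions in the certbot diff above follow a general Python 3 rule: a class that defines `__eq__` without `__hash__` has its `__hash__` set to `None` and becomes unhashable. A minimal standalone sketch (the class below is illustrative, not certbot's actual `Addr`):

```python
class Addr:
    """Illustrative stand-in for an address object with value equality."""
    def __init__(self, tup):
        self.tup = tup

    def __eq__(self, other):
        return isinstance(other, Addr) and self.tup == other.tup

    # In Python 3, defining __eq__ sets __hash__ to None, so instances
    # cannot go into sets/dicts unless __hash__ is restored explicitly.
    def __hash__(self):
        return hash(self.tup)


addrs = {Addr(("1.2.3.4", "80")), Addr(("1.2.3.4", "80"))}
print(len(addrs))  # 1 -- equal objects hash equally and deduplicate
```

Python 2 tolerated the missing `__hash__`, which is why the problem only surfaced when the test suite moved to Python 3.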
gpt-engineer self improvement fixes: Default to using existing file list file | diff --git a/.gitignore b/.gitignore
index 24a9cb4ee0..9753953afa 100644
--- a/.gitignore
+++ b/.gitignore
@@ -79,3 +79,4 @@ docs/db/
poetry.lock
.aider*
+.gpteng
diff --git a/gpt_engineer/file_selector.py b/gpt_engineer/file_selector.py
index f9baa2d7f4..8865f545ff 100644
--- a/gpt_engineer/file_selector.py
+++ b/gpt_engineer/file_selector.py
@@ -5,7 +5,7 @@
import tkinter.filedialog as fd
from pathlib import Path
-from typing import List, Mapping, Union
+from typing import List, Union
from gpt_engineer.db import DB, DBs
@@ -241,6 +241,13 @@ def ask_for_files(metadata_db: DB, workspace_db: DB) -> None:
Returns:
dict[str, str]: Dictionary where key = file name and value = file path
"""
+ if FILE_LIST_NAME in metadata_db:
+ print(
+ f"File list detected at {metadata_db.path / FILE_LIST_NAME}. "
+ "Edit or delete it if you want to select new files."
+ )
+ return
+
use_last_string = ""
if FILE_LIST_NAME in metadata_db:
use_last_string = (
diff --git a/gpt_engineer/steps.py b/gpt_engineer/steps.py
index c302a3b7d7..364ebcf3af 100644
--- a/gpt_engineer/steps.py
+++ b/gpt_engineer/steps.py
@@ -149,7 +149,7 @@ def respec(ai: AI, dbs: DBs) -> List[Message]:
return messages
-def gen_unit_tests(ai: AI, dbs: DBs) -> List[dict]:
+def gen_unit_tests(ai: AI, dbs: DBs) -> List[Message]:
"""
Generate unit tests based on the specification, that should work.
"""
@@ -323,10 +323,10 @@ def get_improve_prompt(ai: AI, dbs: DBs):
"-----------------------------",
"The following files will be used in the improvement process:",
f"{FILE_LIST_NAME}:",
- str(dbs.project_metadata["file_list.txt"]),
+ colored(str(dbs.project_metadata[FILE_LIST_NAME]), "green"),
"",
"The inserted prompt is the following:",
- f"'{dbs.input['prompt']}'",
+ colored(f"{dbs.input['prompt']}", "green"),
"-----------------------------",
"",
"You can change these files in your project before proceeding.",
| https://api.github.com/repos/gpt-engineer-org/gpt-engineer/pulls/734 | 2023-09-22T21:00:32Z | 2023-09-22T21:05:55Z | 2023-09-22T21:05:55Z | 2023-09-22T21:07:12Z | 588 | gpt-engineer-org/gpt-engineer | 33,148 |
|
[mgtv] add bsf:a aac_adtstoasc to ffmpeg params, fix #1458. | diff --git a/src/you_get/processor/ffmpeg.py b/src/you_get/processor/ffmpeg.py
index 1c0ba1a3de..dcc8e1c86d 100644
--- a/src/you_get/processor/ffmpeg.py
+++ b/src/you_get/processor/ffmpeg.py
@@ -125,7 +125,7 @@ def ffmpeg_concat_flv_to_mp4(files, output='output.mp4'):
params = [FFMPEG] + LOGLEVEL + ['-f', 'concat', '-safe', '-1', '-y', '-i']
params.append(output + '.txt')
- params += ['-c', 'copy', output]
+ params += ['-c', 'copy', '-bsf:a', 'aac_adtstoasc', output]
subprocess.check_call(params)
os.remove(output + '.txt')
| Add `'-bsf:a', 'aac_adtstoasc'` params to ffmpeg; solves #1458. I have manually tested this with some other video sites which use `ffmpeg_concat_flv_to_mp4()` and everything seems to work fine; the bsf param has no side effect.
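For context, the fix amounts to inserting an AAC bitstream filter into the concat command's parameter list. A rough Python sketch of the resulting list, mirroring the patched function (the `FFMPEG` and `LOGLEVEL` values here are assumed for illustration):

```python
FFMPEG = "ffmpeg"                  # assumed binary name
LOGLEVEL = ["-loglevel", "quiet"]  # assumed log settings

def concat_flv_to_mp4_params(output="output.mp4"):
    params = [FFMPEG] + LOGLEVEL + ["-f", "concat", "-safe", "-1", "-y", "-i"]
    params.append(output + ".txt")
    # aac_adtstoasc repackages ADTS-framed AAC (as found in FLV/TS
    # segments) into the raw form an MP4 container expects.
    params += ["-c", "copy", "-bsf:a", "aac_adtstoasc", output]
    return params

print(concat_flv_to_mp4_params())
```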
| https://api.github.com/repos/soimort/you-get/pulls/1518 | 2016-11-17T03:22:26Z | 2016-11-18T21:59:55Z | 2016-11-18T21:59:55Z | 2016-11-18T22:00:49Z | 187 | soimort/you-get | 21,230 |
[requires.io] dependency update on main branch | diff --git a/tox.ini b/tox.ini
index 104f27f3f1..8a5bfe1271 100644
--- a/tox.ini
+++ b/tox.ini
@@ -35,7 +35,7 @@ deps =
types-Werkzeug==1.0.2
types-requests==2.25.0
types-cryptography==3.3.3
- types-pyOpenSSL==20.0.3
+ types-pyOpenSSL==20.0.4
commands =
mypy {posargs}
| https://api.github.com/repos/mitmproxy/mitmproxy/pulls/4697 | 2021-07-22T04:33:50Z | 2021-07-24T17:13:10Z | 2021-07-24T17:13:10Z | 2021-07-24T17:13:14Z | 129 | mitmproxy/mitmproxy | 27,467 |
|
Don't lstrip data to be parsed by parse_yaml. Fixes #6384 | diff --git a/lib/ansible/utils/__init__.py b/lib/ansible/utils/__init__.py
index 405641eb16352e..6c2f8112abae48 100644
--- a/lib/ansible/utils/__init__.py
+++ b/lib/ansible/utils/__init__.py
@@ -354,9 +354,9 @@ def smush_ds(data):
def parse_yaml(data, path_hint=None):
''' convert a yaml string to a data structure. Also supports JSON, ssssssh!!!'''
- data = data.lstrip()
+ stripped_data = data.lstrip()
loaded = None
- if data.startswith("{") or data.startswith("["):
+ if stripped_data.startswith("{") or stripped_data.startswith("["):
# since the line starts with { or [ we can infer this is a JSON document.
try:
loaded = json.loads(data)
| Only use stripped data for testing whether the file is JSON; still use unstripped data when actually parsing. Fixes #6348
This prevents indented YAML on the first line from being unindented, which would break the YAML formatting and result in a parser error.
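A minimal sketch of the sniff-vs-parse split described above, using only `json` from the stdlib (the real code hands non-JSON text to a YAML parser):

```python
import json

def parse_config(data):
    # Sniff the format on a stripped COPY, but parse the ORIGINAL text:
    # stripping the real payload would change YAML-significant indentation.
    stripped = data.lstrip()
    if stripped.startswith("{") or stripped.startswith("["):
        return json.loads(data)  # json.loads tolerates leading whitespace
    return ("yaml", data)  # placeholder: pass the unmodified text to YAML

print(parse_config('\n  {"users": ["alice"]}'))  # {'users': ['alice']}
print(parse_config("  key: value")[0])           # yaml
```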
| https://api.github.com/repos/ansible/ansible/pulls/6377 | 2014-03-10T18:45:36Z | 2014-03-10T18:51:14Z | 2014-03-10T18:51:14Z | 2019-04-24T20:01:29Z | 192 | ansible/ansible | 48,948 |
Lib ruamel.yaml version 0.15.55 will cause typing.Sequence[str] unsupported in option. | diff --git a/setup.py b/setup.py
index 80409c8f8b..7c6eb09f5a 100644
--- a/setup.py
+++ b/setup.py
@@ -75,7 +75,7 @@
"pyOpenSSL>=17.5,<18.1",
"pyparsing>=2.1.3, <2.3",
"pyperclip>=1.6.0, <1.7",
- "ruamel.yaml>=0.13.2, <0.15.61", # https://bitbucket.org/ruamel/yaml/issues/234/ruamelyamlsafe_load-includes-a-in-folded
+ "ruamel.yaml>=0.13.2, <0.15.55",
"sortedcontainers>=1.5.4,<2.1",
"tornado>=4.3,<5.2",
"urwid>=2.0.1,<2.1",
| Based on the release notes of `ruamel.yaml` 0.15.55, that version **unmade** CommentedSeq a subclass of list.
As a result, a sequence parsed from a yaml file **cannot** be set on an option whose type is `typing.Sequence[str]`.
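An illustrative reproduction of the breakage: the `CommentedSeqLike` class below is a stand-in for ruamel's post-0.15.55 `CommentedSeq` (a `Sequence` that is no longer a `list` subclass), not the real class, and the validator is a simplified sketch of a strict option type check:

```python
from collections.abc import Sequence

class CommentedSeqLike(Sequence):
    """A Sequence that is NOT a list subclass, like CommentedSeq
    after ruamel.yaml 0.15.55."""
    def __init__(self, items):
        self._items = list(items)
    def __getitem__(self, i):
        return self._items[i]
    def __len__(self):
        return len(self._items)

def accepts_str_sequence(value):
    # A strict isinstance(value, list) check -- the pattern that breaks
    # once the parsed value stops being a list subclass.
    return isinstance(value, list) and all(isinstance(x, str) for x in value)

print(accepts_str_sequence(["a", "b"]))                    # True
print(accepts_str_sequence(CommentedSeqLike(["a", "b"])))  # False: rejected
```

Pinning the dependency below 0.15.55, as the diff does, sidesteps the problem without loosening the type check.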
Pin shortcut to Quick Start Bar and Start Menu. | diff --git a/code/default/launcher/create_shortcut.js b/code/default/launcher/create_shortcut.js
index 5f36b5fbef..e905035fad 100644
--- a/code/default/launcher/create_shortcut.js
+++ b/code/default/launcher/create_shortcut.js
@@ -1,11 +1,12 @@
function CreateShortcut() {
wsh = new ActiveXObject('WScript.Shell');
- target_path = '"' + wsh.CurrentDirectory + '\\..\\..\\..\\start.vbs"';
+ target_path = '"C:\\Windows\\System32\\wscript.exe"';
+ argument_file = '"' + wsh.CurrentDirectory + '\\..\\..\\..\\start.vbs"';
icon_path = wsh.CurrentDirectory + '\\web_ui\\favicon.ico';
link = wsh.CreateShortcut(wsh.SpecialFolders("Desktop") + '\\XX-Net.lnk');
link.TargetPath = target_path;
- link.Arguments = '';
+ link.Arguments = argument_file;
link.WindowStyle = 7;
link.IconLocation = icon_path;
link.Description = 'XX-Net';
| Solved issues: #4311 #4144 | https://api.github.com/repos/XX-net/XX-Net/pulls/10777 | 2018-05-26T06:56:02Z | 2018-05-26T07:16:43Z | 2018-05-26T07:16:43Z | 2018-06-01T11:02:08Z | 243 | XX-net/XX-Net | 17,147 |
modified code for readability and security | diff --git a/tik_tak.py b/tik_tak.py
index e4b4d28942..316b1c512f 100644
--- a/tik_tak.py
+++ b/tik_tak.py
@@ -1,45 +1,47 @@
+#Tik-tak game
-l=["anything",1,2,3,4,5,6,7,8,9]
-i=0
+
+board=["anything",1,2,3,4,5,6,7,8,9]
+switch="p1"
j=9
print("\n\t\t\tTIK-TAC-TOE")
-def board():
+def print_board():
#import os
#os.system('cls')
print("\n\n")
print(" | |" )
- print("",l[1]," | ",l[2]," | ",l[3] )
+ print("",board[1]," | ",board[2]," | ",board[3] )
print("____|_____|____")
print(" | |" )
- print("",l[4]," | ",l[5]," | ",l[6] )
+ print("",board[4]," | ",board[5]," | ",board[6] )
print("____|_____|____")
print(" | |" )
- print("",l[7]," | ",l[8]," | ",l[9] )
+ print("",board[7]," | ",board[8]," | ",board[9] )
print(" | |" )
-def enter_number(p1,p2):
- global i
+def enter_number(p1_sign,p2_sign):
+ global switch
global j
k=9
while(j):
if k==0:
break
- if i==0:
- x=int(input("\nplayer 1 :- "))
- if x<=0:
+ if switch=="p1":
+ p1_input=int(input("\nplayer 1 :- "))
+ if p1_input<=0:
print("chose number from given board")
else:
for e in range(1,10):
- if l[e]==x:
- l[e]=p1
- board()
+ if board[e]==p1_input:
+ board[e]=p1_sign
+ print_board()
c=checkwin()
if c==1:
print("\n\n Congratulation ! player 1 win ")
return
- i=1
+ switch="p2"
j-=1
k-=1
if k==0:
@@ -50,59 +52,60 @@ def enter_number(p1,p2):
break
- if i==1:
- y=int(input("\nplayer 2 :- "))
- if y<=0:
+ if switch=="p2":
+ p2_input=int(input("\nplayer 2 :- "))
+ if p2_input<=0:
print("chose number from given board")
#return
else:
for e in range(1,10):
- if l[e]==y:
- l[e]=p2
- board()
+ if board[e]==p2_input:
+ board[e]=p2_sign
+ print_board()
w=checkwin()
if w==1:
print("\n\n Congratulation ! player 2 win")
return
- i=0
+ switch="p1"
j-=1
k-=1
def checkwin():
- if l[1]==l[2]==l[3]:
+ if board[1]==board[2]==board[3]:
return 1
- elif l[4]==l[5]==l[6]:
+ elif board[4]==board[5]==board[6]:
return 1
- elif l[7]==l[8]==l[9]:
+ elif board[7]==board[8]==board[9]:
return 1
- elif l[1]==l[4]==l[7]:
+ elif board[1]==board[4]==board[7]:
return 1
- elif l[2]==l[5]==l[8]:
+ elif board[2]==board[5]==board[8]:
return 1
- elif l[3]==l[6]==l[9]:
+ elif board[3]==board[6]==board[9]:
return 1
- elif l[1]==l[5]==l[9]:
+ elif board[1]==board[5]==board[9]:
return 1
- elif l[3]==l[5]==l[7]:
+ elif board[3]==board[5]==board[7]:
return 1
else:
print("\n\nGame continue")
-def main():
- board()
- p1=input("\n\nplayer 1 chose your sign [0/x] = ")
- p2=input("player 2 chose your sign [0/x] = ")
- enter_number(p1,p2)
+def play():
+ print_board()
+ p1_sign=input("\n\nplayer 1 chose your sign [0/x] = ")
+ p2_sign=input("player 2 chose your sign [0/x] = ")
+ enter_number(p1_sign,p2_sign)
print("\n\n\t\t\tDeveloped By :- UTKARSH MATHUR")
-main()
+if __name__=="__main__":
+ play()
\ No newline at end of file
| since readability is one of the most important things when you code, I've modified the code so that others can read and understand it easily :) | https://api.github.com/repos/geekcomputers/Python/pulls/413 | 2018-10-15T14:22:21Z | 2018-11-04T21:48:49Z | 2018-11-04T21:48:49Z | 2018-11-12T06:29:14Z | 1,251 | geekcomputers/Python | 31,870 |
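One further step in the same readability direction — a hypothetical follow-up, not part of this PR: the chain of `elif` comparisons in `checkwin` can be made table-driven, so the winning lines are data rather than code:

```python
WIN_LINES = [
    (1, 2, 3), (4, 5, 6), (7, 8, 9),   # rows
    (1, 4, 7), (2, 5, 8), (3, 6, 9),   # columns
    (1, 5, 9), (3, 5, 7),              # diagonals
]

def checkwin(board):
    """board[1..9] holds cell values; index 0 is unused, as in the PR."""
    return any(board[a] == board[b] == board[c] for a, b, c in WIN_LINES)

board = ["anything", "x", "x", "x", 4, 5, 6, 7, 8, 9]
print(checkwin(board))  # True: top row is all "x"
```

This also fixes the duplication at its root: adding a new board size would mean regenerating `WIN_LINES`, not rewriting eight branches.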
✏️ Fix typo on docstring in datastructures file | diff --git a/fastapi/datastructures.py b/fastapi/datastructures.py
index f22409c5175b7..6a44a7df4ff33 100644
--- a/fastapi/datastructures.py
+++ b/fastapi/datastructures.py
@@ -21,7 +21,7 @@ class DefaultPlaceholder:
You shouldn't use this class directly.
It's used internally to recognize when a default value has been overwritten, even
- if the overriden default value was truthy.
+ if the overridden default value was truthy.
"""
def __init__(self, value: Any):
@@ -42,6 +42,6 @@ def Default(value: DefaultType) -> DefaultType:
You shouldn't use this function directly.
It's used internally to recognize when a default value has been overwritten, even
- if the overriden default value was truthy.
+ if the overridden default value was truthy.
"""
return DefaultPlaceholder(value) # type: ignore
| :sweat_smile: | https://api.github.com/repos/tiangolo/fastapi/pulls/2887 | 2021-03-02T22:05:26Z | 2021-07-21T12:14:35Z | 2021-07-21T12:14:34Z | 2021-07-21T12:14:52Z | 221 | tiangolo/fastapi | 23,061 |
Add json parameter | diff --git a/requests/api.py b/requests/api.py
index 01d853d5ca..88db7dc72e 100644
--- a/requests/api.py
+++ b/requests/api.py
@@ -22,6 +22,7 @@ def request(method, url, **kwargs):
:param url: URL for the new :class:`Request` object.
:param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
+ :param json: (optional) json data to send in the body of the :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional) Dictionary of 'name': file-like-objects (or {'name': ('filename', fileobj)}) for multipart encoding upload.
@@ -77,15 +78,16 @@ def head(url, **kwargs):
return request('head', url, **kwargs)
-def post(url, data=None, **kwargs):
+def post(url, data=None, json=None, **kwargs):
"""Sends a POST request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
+ :param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
- return request('post', url, data=data, **kwargs)
+ return request('post', url, data=data, json=json, **kwargs)
def put(url, data=None, **kwargs):
diff --git a/requests/models.py b/requests/models.py
index 03ff627adf..c1f7f561e8 100644
--- a/requests/models.py
+++ b/requests/models.py
@@ -46,6 +46,8 @@
CONTENT_CHUNK_SIZE = 10 * 1024
ITER_CHUNK_SIZE = 512
+json_dumps = json.dumps
+
class RequestEncodingMixin(object):
@property
@@ -189,7 +191,8 @@ class Request(RequestHooksMixin):
:param url: URL to send.
:param headers: dictionary of headers to send.
:param files: dictionary of {filename: fileobject} files to multipart upload.
- :param data: the body to attach the request. If a dictionary is provided, form-encoding will take place.
+ :param data: the body to attach to the request. If a dictionary is provided, form-encoding will take place.
+ :param json: json for the body to attach to the request (if data is not specified).
:param params: dictionary of URL parameters to append to the URL.
:param auth: Auth handler or (user, pass) tuple.
:param cookies: dictionary or CookieJar of cookies to attach to this request.
@@ -209,6 +212,7 @@ def __init__(self,
headers=None,
files=None,
data=None,
+ json=None,
params=None,
auth=None,
cookies=None,
@@ -230,6 +234,7 @@ def __init__(self,
self.headers = headers
self.files = files
self.data = data
+ self.json = json
self.params = params
self.auth = auth
self.cookies = cookies
@@ -246,6 +251,7 @@ def prepare(self):
headers=self.headers,
files=self.files,
data=self.data,
+ json=self.json,
params=self.params,
auth=self.auth,
cookies=self.cookies,
@@ -289,14 +295,15 @@ def __init__(self):
self.hooks = default_hooks()
def prepare(self, method=None, url=None, headers=None, files=None,
- data=None, params=None, auth=None, cookies=None, hooks=None):
+ data=None, params=None, auth=None, cookies=None, hooks=None,
+ json=None):
"""Prepares the entire request with the given parameters."""
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
- self.prepare_body(data, files)
+ self.prepare_body(data, files, json)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
@@ -397,7 +404,7 @@ def prepare_headers(self, headers):
else:
self.headers = CaseInsensitiveDict()
- def prepare_body(self, data, files):
+ def prepare_body(self, data, files, json=None):
"""Prepares the given HTTP body data."""
# Check if file, fo, generator, iterator.
@@ -408,6 +415,10 @@ def prepare_body(self, data, files):
content_type = None
length = None
+ if json is not None:
+ content_type = 'application/json'
+ body = json_dumps(json)
+
is_stream = all([
hasattr(data, '__iter__'),
not isinstance(data, (basestring, list, tuple, dict))
@@ -433,7 +444,7 @@ def prepare_body(self, data, files):
if files:
(body, content_type) = self._encode_files(files, data)
else:
- if data:
+ if data and json is None:
body = self._encode_params(data)
if isinstance(data, basestring) or hasattr(data, 'read'):
content_type = None
@@ -443,7 +454,7 @@ def prepare_body(self, data, files):
self.prepare_content_length(body)
# Add content-type if it wasn't explicitly provided.
- if (content_type) and (not 'content-type' in self.headers):
+ if content_type and ('content-type' not in self.headers):
self.headers['Content-Type'] = content_type
self.body = body
diff --git a/requests/sessions.py b/requests/sessions.py
index 508b0ef29a..c5ad0060ec 100644
--- a/requests/sessions.py
+++ b/requests/sessions.py
@@ -365,6 +365,7 @@ def prepare_request(self, request):
url=request.url,
files=request.files,
data=request.data,
+ json=request.json,
headers=merge_setting(request.headers, self.headers, dict_class=CaseInsensitiveDict),
params=merge_setting(request.params, self.params),
auth=merge_setting(auth, self.auth),
@@ -376,6 +377,7 @@ def prepare_request(self, request):
def request(self, method, url,
params=None,
data=None,
+ json=None,
headers=None,
cookies=None,
files=None,
@@ -396,6 +398,8 @@ def request(self, method, url,
string for the :class:`Request`.
:param data: (optional) Dictionary or bytes to send in the body of the
:class:`Request`.
+ :param json: (optional) json to send in the body of the
+ :class:`Request`.
:param headers: (optional) Dictionary of HTTP Headers to send with the
:class:`Request`.
:param cookies: (optional) Dict or CookieJar object to send with the
@@ -426,6 +430,7 @@ def request(self, method, url,
headers = headers,
files = files,
data = data or {},
+ json = json,
params = params or {},
auth = auth,
cookies = cookies,
@@ -479,15 +484,16 @@ def head(self, url, **kwargs):
kwargs.setdefault('allow_redirects', False)
return self.request('HEAD', url, **kwargs)
- def post(self, url, data=None, **kwargs):
+ def post(self, url, data=None, json=None, **kwargs):
"""Sends a POST request. Returns :class:`Response` object.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
+ :param json: (optional) json to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
"""
- return self.request('POST', url, data=data, **kwargs)
+ return self.request('POST', url, data=data, json=json, **kwargs)
def put(self, url, data=None, **kwargs):
"""Sends a PUT request. Returns :class:`Response` object.
diff --git a/test_requests.py b/test_requests.py
index 716c0dcff6..294ba684f0 100755
--- a/test_requests.py
+++ b/test_requests.py
@@ -986,6 +986,15 @@ def test_requests_history_is_saved(self):
assert item.history == total[0:i]
i=i+1
+ def test_json_param_post_content_type_works(self):
+ r = requests.post(
+ httpbin('post'),
+ json={'life': 42}
+ )
+ assert r.status_code == 200
+ assert 'application/json' in r.request.headers['Content-Type']
+ assert {'life': 42} == r.json()['json']
+
class TestContentEncodingDetection(unittest.TestCase):
| Closes #2025
| https://api.github.com/repos/psf/requests/pulls/2258 | 2014-09-30T15:59:29Z | 2014-10-05T16:46:09Z | 2014-10-05T16:46:09Z | 2021-09-07T00:06:40Z | 2,135 | psf/requests | 32,957 |
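Viewed in isolation, the precedence this diff gives `json` over `data` in `prepare_body` can be sketched as a small standalone function (the names here are illustrative, not the library's internals):

```python
import json as jsonlib
from urllib.parse import urlencode

def prepare_body(data=None, json=None):
    """Pick the request body the way this PR does: an explicit
    `json` argument wins and sets the JSON content type; otherwise
    a dict passed as `data` is form-encoded."""
    if json is not None:
        return jsonlib.dumps(json), "application/json"
    if isinstance(data, dict) and data:
        return urlencode(data), "application/x-www-form-urlencoded"
    return data, None  # strings/bytes pass through untouched

body, ctype = prepare_body(json={"life": 42})
```

As in the PR's test, posting `json={'life': 42}` yields an `application/json` body without the caller serializing anything themselves.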
torture test | diff --git a/tests/data/torture.py b/tests/data/torture.py
new file mode 100644
index 00000000000..79a44c2e34c
--- /dev/null
+++ b/tests/data/torture.py
@@ -0,0 +1,81 @@
+importA;() << 0 ** 101234234242352525425252352352525234890264906820496920680926538059059209922523523525 #
+
+assert sort_by_dependency(
+ {
+ "1": {"2", "3"}, "2": {"2a", "2b"}, "3": {"3a", "3b"},
+ "2a": set(), "2b": set(), "3a": set(), "3b": set()
+ }
+) == ["2a", "2b", "2", "3a", "3b", "3", "1"]
+
+importA
+0;0^0#
+
+class A:
+ def foo(self):
+ for _ in range(10):
+ aaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbb.cccccccccc( # pylint: disable=no-member
+ xxxxxxxxxxxx
+ )
+
+def test(self, othr):
+ return (1 == 2 and
+ (name, description, self.default, self.selected, self.auto_generated, self.parameters, self.meta_data, self.schedule) ==
+ (name, description, othr.default, othr.selected, othr.auto_generated, othr.parameters, othr.meta_data, othr.schedule))
+
+# output
+
+importA
+(
+ ()
+ << 0
+ ** 101234234242352525425252352352525234890264906820496920680926538059059209922523523525
+) #
+
+assert (
+ sort_by_dependency(
+ {
+ "1": {"2", "3"},
+ "2": {"2a", "2b"},
+ "3": {"3a", "3b"},
+ "2a": set(),
+ "2b": set(),
+ "3a": set(),
+ "3b": set(),
+ }
+ )
+ == ["2a", "2b", "2", "3a", "3b", "3", "1"]
+)
+
+importA
+0
+0 ^ 0 #
+
+
+class A:
+ def foo(self):
+ for _ in range(10):
+ aaaaaaaaaaaaaaaaaaa = bbbbbbbbbbbbbbb.cccccccccc(
+ xxxxxxxxxxxx
+ ) # pylint: disable=no-member
+
+
+def test(self, othr):
+ return 1 == 2 and (
+ name,
+ description,
+ self.default,
+ self.selected,
+ self.auto_generated,
+ self.parameters,
+ self.meta_data,
+ self.schedule,
+ ) == (
+ name,
+ description,
+ othr.default,
+ othr.selected,
+ othr.auto_generated,
+ othr.parameters,
+ othr.meta_data,
+ othr.schedule,
+ )
diff --git a/tests/test_format.py b/tests/test_format.py
index 88f084ea478..5a5d06ca49b 100644
--- a/tests/test_format.py
+++ b/tests/test_format.py
@@ -52,6 +52,7 @@
"remove_parens",
"slices",
"string_prefixes",
+ "torture",
"trailing_comma_optional_parens1",
"trailing_comma_optional_parens2",
"trailing_comma_optional_parens3",
| Fixes #2651. Fixes #2754. Fixes #2518. Fixes #2321.
This adds a test that lists a number of cases of unstable formatting
that we have seen in the issue tracker. Checking it in will ensure
that we don't regress on these cases.
| https://api.github.com/repos/psf/black/pulls/2815 | 2022-01-28T05:01:13Z | 2022-01-29T00:48:39Z | 2022-01-29T00:48:39Z | 2022-01-29T00:48:43Z | 827 | psf/black | 23,834 |
[embed] correct tudou pattern | diff --git a/src/you_get/extractors/embed.py b/src/you_get/extractors/embed.py
index a177e66394..fc4015c4ee 100644
--- a/src/you_get/extractors/embed.py
+++ b/src/you_get/extractors/embed.py
@@ -25,7 +25,7 @@
"""
http://www.tudou.com/programs/view/html5embed.action?type=0&code=3LS_URGvl54&lcode=&resourceId=0_06_05_99
"""
-tudou_embed_patterns = [ 'tudou\.com[a-zA-Z0-9\/\?=\&\.\;]+code=([a-zA-Z0-9_]+)\&',
+tudou_embed_patterns = [ 'tudou\.com[a-zA-Z0-9\/\?=\&\.\;]+code=([a-zA-Z0-9_-]+)\&',
'www\.tudou\.com/v/([a-zA-Z0-9_-]+)/[^"]*v\.swf'
]
| Hyphen-minus (`-`) is a valid character in Tudou's video ID. It's even present in the second pattern of `tudou_embed_patterns`, just not the first.
<!-- Reviewable:start -->
---
This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/soimort/you-get/1492)
<!-- Reviewable:end -->
| https://api.github.com/repos/soimort/you-get/pulls/1492 | 2016-11-03T00:48:09Z | 2016-11-03T16:01:00Z | 2016-11-03T16:01:00Z | 2016-11-03T16:37:32Z | 238 | soimort/you-get | 21,225 |
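The one-character fix is easier to see with a side-by-side match. This sketch uses a hypothetical video code containing a hyphen (the real example ID in the comment has none); the simplified patterns mirror the first entry of `tudou_embed_patterns` before and after the change:

```python
import re

pattern_old = r'tudou\.com[a-zA-Z0-9/?=&.;]+code=([a-zA-Z0-9_]+)&'
pattern_new = r'tudou\.com[a-zA-Z0-9/?=&.;]+code=([a-zA-Z0-9_-]+)&'

# Hypothetical embed URL whose video code contains a hyphen-minus.
url = ('http://www.tudou.com/programs/view/html5embed.action'
       '?type=0&code=3LS_URG-vl54&lcode=')

old_match = re.search(pattern_old, url)  # fails: '-' is not in the class
new_match = re.search(pattern_new, url)  # captures the full code
```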
Path whitespace issue | diff --git a/start b/start
index 724b4fa63a..b5cc37fd5a 100755
--- a/start
+++ b/start
@@ -1,7 +1,7 @@
#!/bin/bash
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
-cd $SCRIPTPATH
+cd "$SCRIPTPATH"
if python -V | grep -q "Python 3" ;then
PYTHON="python2"
| May fix #3594 | https://api.github.com/repos/XX-net/XX-Net/pulls/4863 | 2017-01-12T09:02:36Z | 2017-01-14T14:27:41Z | 2017-01-14T14:27:41Z | 2017-01-14T15:09:15Z | 104 | XX-net/XX-Net | 17,286 |
Added pymssql on Database Drivers | diff --git a/README.md b/README.md
index 5a643f179..b8505f552 100644
--- a/README.md
+++ b/README.md
@@ -406,6 +406,7 @@ A curated list of awesome Python frameworks, libraries and software. Inspired by
* [queries](https://github.com/gmr/queries) - A wrapper of the psycopg2 library for interacting with PostgreSQL.
* [txpostgres](http://txpostgres.readthedocs.org/) - Twisted based asynchronous driver for PostgreSQL.
* [python-sql](https://pypi.python.org/pypi/python-sql) - Write SQL queries pythonically.
+ * [pymssql](http://www.pymssql.org/) - A simple database interface to Microsoft SQL Server.
* NoSQL Databases
* [cassandra-python-driver](https://github.com/datastax/python-driver) - Python driver for Cassandra.
* [HappyBase](http://happybase.readthedocs.org/) - A developer-friendly library for Apache HBase.
| https://api.github.com/repos/vinta/awesome-python/pulls/476 | 2015-10-13T18:28:56Z | 2015-10-16T06:51:55Z | 2015-10-16T06:51:55Z | 2015-10-16T06:51:55Z | 225 | vinta/awesome-python | 26,909 |
|
ESLint only runs one on Circle CI | diff --git a/.circleci/config.yml b/.circleci/config.yml
index f4f498181b60..7c04a0094e9e 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -501,7 +501,7 @@ jobs:
name: Run linters
command: |
# Run eslint as a standalone command to generate the test report.
- PRE_COMMIT_NO_CONCURRENCY=true DISABLE=eslint pipenv run pre-commit run --show-diff-on-failure --color=always --all-files
+ PRE_COMMIT_NO_CONCURRENCY=true SKIP=eslint pipenv run pre-commit run --show-diff-on-failure --color=always --all-files
make jslint
- store_test_results:
|
<!--
Before contributing (PLEASE READ!)
⚠️ If your contribution is more than a few lines of code, then prior to starting to code on it please post in the issue saying you want to volunteer, then wait for a positive response. And if there is no issue for it yet, create it first.
This helps make sure:
1. Two people aren't working on the same thing
2. This is something Streamlit's maintainers believe should be implemented/fixed
3. Any API, UI, or deeper architectural changes that need to be implemented have been fully thought through by Streamlit's maintainers
4. Your time is well spent!
More information in our wiki: https://github.com/streamlit/streamlit/wiki/Contributing
-->
## 📚 Context
ESLint is run twice because an invalid variable name has been used.
<img width="1412" alt="Screenshot 2022-09-27 at 12 44 34" src="https://user-images.githubusercontent.com/78743291/192506107-a4350b47-8b76-4208-a74f-fdeedf7d7dc0.png">
- What kind of change does this PR introduce?
- [X] Bugfix
- [ ] Feature
- [ ] Refactoring
- [ ] Other, please describe:
## 🧠 Description of Changes
- _Add bullet points summarizing your changes here_
- [ ] This is a breaking API change
- [ ] This is a visible (user-facing) change
**Revised:**
_Insert screenshot of your updated UI/code here_
**Current:**
_Insert screenshot of existing UI/code here_
## 🧪 Testing Done
- [ ] Screenshots included
- [ ] Added/Updated unit tests
- [ ] Added/Updated e2e tests
## 🌐 References
_Does this depend on other work, documents, or tickets?_
- **Issue**: Closes #XXXX
---
**Contribution License Agreement**
By submitting this pull request you agree that all contributions to this project are made under the Apache 2.0 license.
| https://api.github.com/repos/streamlit/streamlit/pulls/5428 | 2022-09-27T10:47:24Z | 2022-09-28T22:25:50Z | 2022-09-28T22:25:50Z | 2023-11-02T00:04:19Z | 170 | streamlit/streamlit | 22,332 |
allow server replay functionality to run on a different port | diff --git a/mitmproxy/addons/serverplayback.py b/mitmproxy/addons/serverplayback.py
index 51ba60b4a5..0818696f8b 100644
--- a/mitmproxy/addons/serverplayback.py
+++ b/mitmproxy/addons/serverplayback.py
@@ -68,6 +68,13 @@ def load(self, loader):
to replay.
"""
)
+ loader.add_option(
+ "server_replay_ignore_port", bool, False,
+ """
+ Ignore request's destination port while searching for a saved flow
+ to replay.
+ """
+ )
@command.command("replay.server")
def load_flows(self, flows: typing.Sequence[flow.Flow]) -> None:
@@ -110,7 +117,7 @@ def _hash(self, flow):
_, _, path, _, query, _ = urllib.parse.urlparse(r.url)
queriesArray = urllib.parse.parse_qsl(query, keep_blank_values=True)
- key: typing.List[typing.Any] = [str(r.port), str(r.scheme), str(r.method), str(path)]
+ key: typing.List[typing.Any] = [str(r.scheme), str(r.method), str(path)]
if not ctx.options.server_replay_ignore_content:
if ctx.options.server_replay_ignore_payload_params and r.multipart_form:
key.extend(
@@ -129,6 +136,8 @@ def _hash(self, flow):
if not ctx.options.server_replay_ignore_host:
key.append(r.host)
+ if not ctx.options.server_replay_ignore_port:
+ key.append(r.port)
filtered = []
ignore_params = ctx.options.server_replay_ignore_params or []
| This PR takes the non-controversial changes from #3086. | https://api.github.com/repos/mitmproxy/mitmproxy/pulls/3703 | 2019-11-15T15:16:10Z | 2019-11-15T16:13:01Z | 2019-11-15T16:13:01Z | 2019-11-15T16:13:05Z | 371 | mitmproxy/mitmproxy | 27,827 |
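Stripped of mitmproxy specifics, the change moves the port from the mandatory part of the replay hash key into an optional one. A minimal sketch of that key construction (hypothetical names, not mitmproxy's API):

```python
import hashlib
import json

def flow_key(scheme, method, path, host, port,
             ignore_host=False, ignore_port=False):
    """Build a lookup key for matching a live request against a
    recorded flow; omitting the port lets a capture taken on one
    port replay against a server listening on another."""
    parts = [scheme, method, path]
    if not ignore_host:
        parts.append(host)
    if not ignore_port:
        parts.append(str(port))
    return hashlib.sha256(json.dumps(parts).encode()).hexdigest()

strict = flow_key("https", "GET", "/api", "example.com", 443)
moved = flow_key("https", "GET", "/api", "example.com", 8443)
loose_a = flow_key("https", "GET", "/api", "example.com", 443, ignore_port=True)
loose_b = flow_key("https", "GET", "/api", "example.com", 8443, ignore_port=True)
```

With `ignore_port` set, the same recording matches regardless of destination port.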
docs: improve documentation for installation of unstable version | diff --git a/docs/README.md b/docs/README.md
index b63752a9d6..f9013b7e75 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -250,36 +250,39 @@ $ pkg upgrade www/py-httpie
### Unstable version
-You can also install the latest unreleased development version directly from the `master` branch on GitHub.
-It is a work-in-progress of a future stable release so the experience might be not as smooth.
+If you want to try out the latest version of HTTPie that hasn't been officially released yet, you can install the development or unstable version directly from the master branch on GitHub. However, keep in mind that the development version is a work in progress and may not be as reliable as the stable version.
-You can install it on Linux, macOS, Windows, or FreeBSD with `pip`:
+You can use the following command to install the development version of HTTPie on Linux, macOS, Windows, or FreeBSD operating systems. With this command, the code present in the `master` branch is downloaded and installed using `pip`.
```bash
$ python -m pip install --upgrade https://github.com/httpie/httpie/archive/master.tar.gz
```
-Or on macOS, and Linux, with Homebrew:
+There are other ways to install the development version of HTTPie on macOS and Linux.
+
+You can install it using Homebrew by running the following commands:
```bash
$ brew uninstall --force httpie
$ brew install --HEAD httpie
```
-And even on macOS, and Linux, with Snapcraft:
+You can install it using Snapcraft by running the following commands:
```bash
$ snap remove httpie
$ snap install httpie --edge
```
-Verify that now you have the [current development version identifier](https://github.com/httpie/httpie/blob/master/httpie/__init__.py#L6) with the `.dev0` suffix, for example:
+To verify the installation, you can compare the [version identifier on GitHub](https://github.com/httpie/httpie/blob/master/httpie/__init__.py#L6) with the one available on your machine. You can check the version of HTTPie on your machine by using the command `http --version`.
```bash
$ http --version
# 3.X.X.dev0
```
+Note that on your machine, the version name will have the `.dev0` suffix.
+
## Usage
Hello World:
| https://api.github.com/repos/httpie/cli/pulls/1490 | 2023-03-23T12:18:49Z | 2023-05-09T09:23:29Z | 2023-05-09T09:23:29Z | 2023-05-11T09:43:58Z | 538 | httpie/cli | 33,955 |
|
Update requirements.txt `albumentations>=1.0.2` | diff --git a/requirements.txt b/requirements.txt
index ef1736a12d5..886d21ce804 100755
--- a/requirements.txt
+++ b/requirements.txt
@@ -27,5 +27,5 @@ pandas
# extras --------------------------------------
# Cython # for pycocotools https://github.com/cocodataset/cocoapi/issues/172
# pycocotools>=2.0 # COCO mAP
-# albumentations>=1.0.0
+# albumentations>=1.0.2
thop # FLOPs computation
|
## 🛠️ PR Summary
<sub>Made with ❤️ by [Ultralytics Actions](https://github.com/ultralytics/actions)<sub>
### 🌟 Summary
Enhanced image augmentation requirements for YOLOv5.
### 📊 Key Changes
- Updated the version requirement for `albumentations` from `>=1.0.0` to `>=1.0.2`.
### 🎯 Purpose & Impact
- **Purpose**: To ensure compatibility with the latest features and fixes in the `albumentations` library.
- **Impact**: Users will benefit from improved augmentation techniques and bug fixes, potentially enhancing model accuracy and performance. This change requires users to update their `albumentations` library to at least version 1.0.2. 📈 | https://api.github.com/repos/ultralytics/yolov5/pulls/3972 | 2021-07-11T17:07:35Z | 2021-07-11T17:07:42Z | 2021-07-11T17:07:42Z | 2024-01-19T17:01:46Z | 134 | ultralytics/yolov5 | 25,588 |
Officially support using pathlib.Path for static_folder. | diff --git a/CHANGES.rst b/CHANGES.rst
index 424fe8769e..c0209cde2f 100644
--- a/CHANGES.rst
+++ b/CHANGES.rst
@@ -29,6 +29,15 @@ Unreleased
argument can be passed. :issue:`3553`
+Version 1.1.x
+-------------
+
+Not yet released.
+
+- Officially support passing a :class:`pathlib.Path` for
+ ``static_folder`` which stopped working in 1.1.2. :pr:`3579`
+
+
Version 1.1.2
-------------
diff --git a/src/flask/helpers.py b/src/flask/helpers.py
index 42c9e0dc20..786e096dc6 100644
--- a/src/flask/helpers.py
+++ b/src/flask/helpers.py
@@ -980,7 +980,7 @@ def static_folder(self):
@static_folder.setter
def static_folder(self, value):
if value is not None:
- value = value.rstrip("/\\")
+ value = os.fspath(value).rstrip(r"\/")
self._static_folder = value
@property
diff --git a/tests/test_basic.py b/tests/test_basic.py
index dd0e7b6844..02172b882d 100644
--- a/tests/test_basic.py
+++ b/tests/test_basic.py
@@ -1406,6 +1406,16 @@ def test_static_url_empty_path_default(app):
rv.close()
+@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires Python >= 3.6")
+def test_static_folder_with_pathlib_path(app):
+ from pathlib import Path
+
+ app = flask.Flask(__name__, static_folder=Path("static"))
+ rv = app.test_client().open("/static/index.html", method="GET")
+ assert rv.status_code == 200
+ rv.close()
+
+
def test_static_folder_with_ending_slash():
app = flask.Flask(__name__, static_folder="static/")
| * No longer causes `AttributeError: 'PosixPath' object has no attribute 'rstrip'`.
* This was broken by e6178fe489b7828acc2bb8fd4b56a70b11ab6c6a which was released in 1.1.2.
* Add a regression test that now passes.
See #3557.
(That issue was repurposed for migrating to pathlib internally, which this PR does not resolve. Merely officially supporting Paths for `static_folder` need not require that.)
<!--
Commit checklist:
* add tests that fail without the patch
* ensure all tests pass with ``pytest``
* add documentation to the relevant docstrings or pages
* add ``versionadded`` or ``versionchanged`` directives to relevant docstrings
* add a changelog entry if this patch changes code
Tests, coverage, and docs will be run automatically when you submit the pull
request, but running them yourself can save time.
-->
| https://api.github.com/repos/pallets/flask/pulls/3579 | 2020-04-20T19:20:59Z | 2020-07-06T17:45:04Z | 2020-07-06T17:45:04Z | 2020-11-14T01:21:37Z | 463 | pallets/flask | 20,121 |
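The whole fix is the `os.fspath` call: `rstrip` exists on `str` but not on `pathlib.Path`, so coercing first restores `Path` support. A standalone sketch of the corrected setter logic:

```python
import os
from pathlib import Path

def normalize_static_folder(value):
    """Coerce any path-like object (str, pathlib.Path, anything
    implementing __fspath__) to a string, then strip trailing
    slashes and backslashes as the fixed setter does."""
    if value is not None:
        value = os.fspath(value).rstrip(r"\/")
    return value
```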
img2img batch PNG info model hash | diff --git a/modules/img2img.py b/modules/img2img.py
index 1519e132b2b..c81c7ab9e8c 100644
--- a/modules/img2img.py
+++ b/modules/img2img.py
@@ -10,6 +10,7 @@
from modules.generation_parameters_copypaste import create_override_settings_dict, parse_generation_parameters
from modules.processing import Processed, StableDiffusionProcessingImg2Img, process_images
from modules.shared import opts, state
+from modules.sd_models import get_closet_checkpoint_match
import modules.shared as shared
import modules.processing as processing
from modules.ui import plaintext_to_html
@@ -41,7 +42,8 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
cfg_scale = p.cfg_scale
sampler_name = p.sampler_name
steps = p.steps
-
+ override_settings = p.override_settings
+ sd_model_checkpoint_override = get_closet_checkpoint_match(override_settings.get("sd_model_checkpoint", None))
for i, image in enumerate(images):
state.job = f"{i+1} out of {len(images)}"
if state.skipped:
@@ -104,6 +106,14 @@ def process_batch(p, input_dir, output_dir, inpaint_mask_dir, args, to_scale=Fal
p.sampler_name = parsed_parameters.get("Sampler", sampler_name)
p.steps = int(parsed_parameters.get("Steps", steps))
+ model_info = get_closet_checkpoint_match(parsed_parameters.get("Model hash", None))
+ if model_info is not None:
+ p.override_settings['sd_model_checkpoint'] = model_info.name
+ elif sd_model_checkpoint_override:
+ p.override_settings['sd_model_checkpoint'] = sd_model_checkpoint_override
+ else:
+ p.override_settings.pop("sd_model_checkpoint", None)
+
proc = modules.scripts.scripts_img2img.run(p, *args)
if proc is None:
if output_dir:
diff --git a/modules/ui.py b/modules/ui.py
index 2b6a13cbb6c..9c5082c3190 100644
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -614,7 +614,7 @@ def update_orig(image, state):
with gr.Accordion("PNG info", open=False):
img2img_batch_use_png_info = gr.Checkbox(label="Append png info to prompts", **shared.hide_dirs, elem_id="img2img_batch_use_png_info")
img2img_batch_png_info_dir = gr.Textbox(label="PNG info directory", **shared.hide_dirs, placeholder="Leave empty to use input directory", elem_id="img2img_batch_png_info_dir")
- img2img_batch_png_info_props = gr.CheckboxGroup(["Prompt", "Negative prompt", "Seed", "CFG scale", "Sampler", "Steps"], label="Parameters to take from png info", info="Prompts from png info will be appended to prompts set in ui.")
+ img2img_batch_png_info_props = gr.CheckboxGroup(["Prompt", "Negative prompt", "Seed", "CFG scale", "Sampler", "Steps", "Model hash"], label="Parameters to take from png info", info="Prompts from png info will be appended to prompts set in ui.")
img2img_tabs = [tab_img2img, tab_sketch, tab_inpaint, tab_inpaint_color, tab_inpaint_upload, tab_batch]
| ## Description
[[Feature Request]: Add the ability to use checkpoint metadata in Img2Img > Batch > PNG Info](https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12757)
allow the use of the SD model hash from PNG info for `img2img batch` `Append png info to prompts`
## Screenshots/videos:
![image](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/40751091/6b592abf-e912-4f92-8c9c-c609515d38bf)
## Checklist:
- [x] I have read [contributing wiki page](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)
- [x] I have performed a self-review of my own code
- [x] My code follows the [style guidelines](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing#code-style)
- [x] My code passes [tests](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Tests)
| https://api.github.com/repos/AUTOMATIC1111/stable-diffusion-webui/pulls/12767 | 2023-08-25T07:01:32Z | 2023-08-25T08:41:32Z | 2023-08-25T08:41:32Z | 2023-08-25T08:41:35Z | 757 | AUTOMATIC1111/stable-diffusion-webui | 39,686 |
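The precedence the patch establishes per image — PNG-info model hash first, then the UI's override setting, then no override at all — can be sketched independently of the webui (names hypothetical):

```python
def resolve_checkpoint(png_model_hash, ui_override, lookup):
    """Return the checkpoint name to apply to one batch image.
    `lookup` maps a hash to a checkpoint name and returns None on
    a miss, like `get_closet_checkpoint_match` does."""
    found = lookup(png_model_hash) if png_model_hash else None
    if found is not None:
        return found
    return ui_override  # may be None: no per-image override

known = {"abc123": "model-v1.ckpt"}
resolved = resolve_checkpoint("abc123", "fallback.ckpt", known.get)
```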
Fix : Error handling on sqs send empty message batch | diff --git a/localstack/services/sqs/sqs_listener.py b/localstack/services/sqs/sqs_listener.py
index c49c21a25da59..c5cf72920dada 100644
--- a/localstack/services/sqs/sqs_listener.py
+++ b/localstack/services/sqs/sqs_listener.py
@@ -14,7 +14,7 @@
from localstack.utils.analytics import event_publisher
from localstack.services.awslambda import lambda_api
from localstack.services.generic_proxy import ProxyListener
-from localstack.utils.aws.aws_responses import requests_response
+from localstack.utils.aws.aws_responses import requests_response, make_error
XMLNS_SQS = 'http://queue.amazonaws.com/doc/2012-11-05/'
@@ -196,6 +196,13 @@ def get_external_port(headers, request_handler):
return request_handler.proxy.port
+def validate_empty_message_batch(data, req_data):
+ data = to_str(data).split('Entries=')
+ if len(data) > 1 and not req_data.get('Entries'):
+ return True
+ return False
+
+
class ProxyListenerSQS(ProxyListener):
def forward_request(self, method, path, data, headers):
if method == 'OPTIONS':
@@ -274,6 +281,11 @@ def return_response(self, method, path, data, headers, response, request_handler
queue_url = re.match(r'.*<QueueUrl>(.*)</QueueUrl>', content_str, re.DOTALL).group(1)
_set_queue_attributes(queue_url, req_data)
+ elif action == 'SendMessageBatch':
+ if validate_empty_message_batch(data, req_data):
+ msg = 'There should be at least one SendMessageBatchRequestEntry in the request.'
+ return make_error(code=404, code_string='EmptyBatchRequest', message=msg)
+
# instruct listeners to fetch new SQS message
if action in ('SendMessage', 'SendMessageBatch'):
_process_sent_message(path, req_data, headers)
diff --git a/localstack/utils/aws/aws_responses.py b/localstack/utils/aws/aws_responses.py
index 62595bea1a2ca..8d26ea031cab1 100644
--- a/localstack/utils/aws/aws_responses.py
+++ b/localstack/utils/aws/aws_responses.py
@@ -6,6 +6,7 @@
from localstack.constants import TEST_AWS_ACCOUNT_ID, MOTO_ACCOUNT_ID
from localstack.utils.aws import aws_stack
from requests.models import CaseInsensitiveDict
+from localstack.utils.common import short_uid
def flask_error_response(msg, code=500, error_type='InternalFailure'):
@@ -47,6 +48,18 @@ def response_regex_replace(response, search, replace):
response.headers['Content-Length'] = str(len(response._content))
+def make_error(message, code=400, code_string='InvalidParameter'):
+ response = Response()
+ response._content = """<ErrorResponse xmlns="http://sns.amazonaws.com/doc/2010-03-31/"><Error>
+ <Type>Sender</Type>
+ <Code>{code_string}</Code>
+ <Message>{message}</Message>
+ </Error><RequestId>{req_id}</RequestId>
+ </ErrorResponse>""".format(message=message, code_string=code_string, req_id=short_uid())
+ response.status_code = code
+ return response
+
+
class LambdaResponse(object):
# this object has been created to support multi_value_headers in aws responses.
def __init__(self):
diff --git a/tests/integration/test_sqs.py b/tests/integration/test_sqs.py
index 20f4ce39a76ed..d672b10869a7d 100644
--- a/tests/integration/test_sqs.py
+++ b/tests/integration/test_sqs.py
@@ -2,7 +2,7 @@
import json
import time
import unittest
-
+from botocore.exceptions import ClientError
from localstack.utils import testutil
from localstack.utils.testutil import get_lambda_log_events, get_lambda_log_group_name
from localstack.utils.aws import aws_stack
@@ -572,3 +572,30 @@ def test_get_queue_attributes(self):
redrive_policy = json.loads(response['Attributes']['RedrivePolicy'])
self.assertEqual(redrive_policy['maxReceiveCount'], 1)
self.assertIn(redrive_policy['deadLetterTargetArn'], queue_arn1)
+
+ def test_send_message_batch_with_empty_list(self):
+ client = self.client
+ response = client.create_queue(QueueName='test-queue')
+ queue_url = response['QueueUrl']
+
+ try:
+ client.send_message_batch(QueueUrl=queue_url, Entries=[])
+ except ClientError as e:
+ self.assertEqual(e.response['Error']['Code'], 'EmptyBatchRequest')
+ self.assertEqual(e.response['ResponseMetadata']['HTTPStatusCode'], 404)
+
+ entries = [{
+ 'Id': 'message{:02d}'.format(0),
+ 'MessageBody': 'msgBody{:02d}'.format(0),
+ 'MessageAttributes': {
+ 'CustomAttribute': {
+ 'DataType': 'String',
+ 'StringValue': 'CustomAttributeValue{:02d}'.format(0)
+ }
+ }
+ }]
+
+ result = client.send_message_batch(QueueUrl=queue_url, Entries=entries)
+ self.assertEqual(result['ResponseMetadata']['HTTPStatusCode'], 200)
+ # clean up
+ client.delete_queue(QueueUrl=queue_url)
| Fix : Error handling on sqs send empty message batch #1658 | https://api.github.com/repos/localstack/localstack/pulls/2412 | 2020-05-10T21:07:13Z | 2020-05-25T22:12:28Z | 2020-05-25T22:12:28Z | 2020-05-25T22:12:29Z | 1,188 | localstack/localstack | 28,579 |
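Separated from the listener plumbing, the behaviour under test is: an empty `Entries` list must produce an `EmptyBatchRequest` XML error rather than reach the backend. A minimal sketch, with the template trimmed from the PR's `make_error`:

```python
import uuid

ERROR_XML = ("<ErrorResponse><Error><Type>Sender</Type>"
             "<Code>{code}</Code><Message>{message}</Message>"
             "</Error><RequestId>{req_id}</RequestId></ErrorResponse>")

def handle_send_message_batch(entries):
    """Return (status_code, body); body is an XML error document
    when the batch is empty, None when the request may proceed."""
    if not entries:
        message = ("There should be at least one "
                   "SendMessageBatchRequestEntry in the request.")
        body = ERROR_XML.format(code="EmptyBatchRequest",
                                message=message,
                                req_id=uuid.uuid4().hex[:8])
        return 404, body
    return 200, None

status, body = handle_send_message_batch([])
```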
Fix FQDN checks, closes #3057 and #3056 | diff --git a/certbot/tests/cli_test.py b/certbot/tests/cli_test.py
index 9c81c070bbd..89655083778 100644
--- a/certbot/tests/cli_test.py
+++ b/certbot/tests/cli_test.py
@@ -342,11 +342,11 @@ def test_check_config_sanity_domain(self):
# FQDN
self.assertRaises(errors.ConfigurationError,
self._call,
- ['-d', 'comma,gotwrong.tld'])
+ ['-d', 'a' * 64])
# FQDN 2
self.assertRaises(errors.ConfigurationError,
self._call,
- ['-d', 'illegal.character=.tld'])
+ ['-d', (('a' * 50) + '.') * 10])
# Wildcard
self.assertRaises(errors.ConfigurationError,
self._call,
diff --git a/certbot/tests/display/ops_test.py b/certbot/tests/display/ops_test.py
index 3aff37d86f3..26f67b69f90 100644
--- a/certbot/tests/display/ops_test.py
+++ b/certbot/tests/display/ops_test.py
@@ -248,9 +248,9 @@ def test_filter_names_cancel(self, mock_util):
def test_get_valid_domains(self):
from certbot.display.ops import get_valid_domains
all_valid = ["example.com", "second.example.com",
- "also.example.com"]
- all_invalid = ["xn--ls8h.tld", "*.wildcard.com", "notFQDN",
- "uniçodé.com"]
+ "also.example.com", "under_score.example.com",
+ "justtld"]
+ all_invalid = ["xn--ls8h.tld", "*.wildcard.com", "uniçodé.com"]
two_valid = ["example.com", "xn--ls8h.tld", "also.example.com"]
self.assertEqual(get_valid_domains(all_valid), all_valid)
self.assertEqual(get_valid_domains(all_invalid), [])
@@ -276,19 +276,18 @@ def test_choose_manually(self, mock_util):
mock_util().input.return_value = (display_util.OK,
"xn--ls8h.tld")
self.assertEqual(_choose_names_manually(), [])
- # non-FQDN and no retry
- mock_util().input.return_value = (display_util.OK,
- "notFQDN")
- self.assertEqual(_choose_names_manually(), [])
- # Two valid domains
+ # Valid domains
mock_util().input.return_value = (display_util.OK,
("example.com,"
+ "under_score.example.com,"
+ "justtld,"
"valid.example.com"))
self.assertEqual(_choose_names_manually(),
- ["example.com", "valid.example.com"])
+ ["example.com", "under_score.example.com",
+ "justtld", "valid.example.com"])
# Three iterations
mock_util().input.return_value = (display_util.OK,
- "notFQDN")
+ "uniçodé.com")
yn = mock.MagicMock()
yn.side_effect = [True, True, False]
mock_util().yesno = yn
diff --git a/certbot/util.py b/certbot/util.py
index 35c599737c7..301fc669b6c 100644
--- a/certbot/util.py
+++ b/certbot/util.py
@@ -423,14 +423,17 @@ def enforce_domain_sanity(domain):
# It wasn't an IP address, so that's good
pass
- # FQDN checks from
- # http://www.mkyong.com/regular-expressions/domain-name-regular-expression-example/
- # Characters used, domain parts < 63 chars, tld > 1 < 64 chars
- # first and last char is not "-"
- fqdn = re.compile("^((?!-)[A-Za-z0-9-]{1,63}(?<!-)\\.)+[A-Za-z]{2,63}$")
- if not fqdn.match(domain):
- raise errors.ConfigurationError("Requested domain {0} is not a FQDN"
- .format(domain))
+ # FQDN checks according to RFC 2181: domain name should be less than 255
+ # octets (inclusive). And each label is 1 - 63 octets (inclusive).
+ # https://tools.ietf.org/html/rfc2181#section-11
+ msg = "Requested domain {0} is not a FQDN because ".format(domain)
+ labels = domain.split('.')
+ for l in labels:
+ if not 0 < len(l) < 64:
+ raise errors.ConfigurationError(msg + "label {0} is too long.".format(l))
+ if len(domain) > 255:
+ raise errors.ConfigurationError(msg + "it is too long.")
+
return domain
| To test this fixes #3057 try
```
# ./certbot-auto certonly --standalone --email wkoszek@mycompany.com -d wkoszek_nc2.x.mycompany.com
```
To test this fixes #3056 try
```
# ./certbot-auto certonly --standalone --email wkoszek@mycompany.com -d ai
```
These no longer raise configuration errors.
To fix this I changed the `certbot.le_util.enforce_domain_sanity` check to match what [RFC 2181](https://tools.ietf.org/html/rfc2181#section-11) specifies as a valid domain name.
| https://api.github.com/repos/certbot/certbot/pulls/3061 | 2016-05-24T20:06:53Z | 2016-06-18T02:17:09Z | 2016-06-18T02:17:09Z | 2016-10-06T01:21:36Z | 1,081 | certbot/certbot | 2,472 |
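The replacement logic is just RFC 2181's length limits — each label 1-63 octets, the whole name at most 255 — with the old character-class regex dropped, which is what lets `wkoszek_nc2.x.mycompany.com` and the bare `ai` TLD through. A standalone sketch using `ValueError` in place of certbot's `ConfigurationError`:

```python
def enforce_domain_sanity(domain):
    """Length checks from RFC 2181 section 11 only; no character
    restrictions, mirroring the new certbot.util logic."""
    msg = "Requested domain {0} is not a FQDN because ".format(domain)
    for label in domain.split('.'):
        if not 0 < len(label) < 64:
            raise ValueError(msg + "label {0} is too long.".format(label))
    if len(domain) > 255:
        raise ValueError(msg + "it is too long.")
    return domain
```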
Update ddos.py | diff --git a/tools/ddos.py b/tools/ddos.py
index 2017c7d3..7cf37ea4 100644
--- a/tools/ddos.py
+++ b/tools/ddos.py
@@ -26,7 +26,7 @@ def run(self):
timer = input(" Enter Timer >> ")
os.system("cd ddos;")
subprocess.run([
- "sudo", "python3 ddos", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])
+ "sudo", "python3 ddos", method, url, "socks_type5.4.1", threads, proxylist, multiple, timer])
class SlowLoris(HackingTool):
| solve issue #174 #175 | https://api.github.com/repos/Z4nzu/hackingtool/pulls/176 | 2022-01-12T03:44:18Z | 2022-06-12T19:35:58Z | 2022-06-12T19:35:58Z | 2022-06-12T19:35:58Z | 157 | Z4nzu/hackingtool | 9,918 |
add knowledge graph tutorial link | diff --git a/docs/end_to_end_tutorials/graphs.md b/docs/end_to_end_tutorials/graphs.md
index ce87989114ec0..acf49db9f42ba 100644
--- a/docs/end_to_end_tutorials/graphs.md
+++ b/docs/end_to_end_tutorials/graphs.md
@@ -1,10 +1,15 @@
-# Graphs
+# Knowledge Graphs
+
+LlamaIndex contains some fantastic guides for building with knowledge graphs.
+
+Check out the end-to-end tutorials/workshops below. Also check out our knowledge graph query engine guides [here](/core_modules/query_modules/query_engine/modules.md).
```{toctree}
---
maxdepth: 1
---
+LlamaIndex Workshop: Building RAG with Knowledge Graphs <https://colab.research.google.com/drive/1tLjOg2ZQuIClfuWrAC2LdiZHCov8oUbs>
REBEL + Knowledge Graph Index <https://colab.research.google.com/drive/1G6pcR0pXvSkdMQlAK_P-IrYgo-_staxd?usp=sharing>
```
diff --git a/docs/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb b/docs/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb
index 6b60e21bd40dc..64b576c0c8d04 100644
--- a/docs/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb
+++ b/docs/examples/index_structs/knowledge_graph/KnowledgeGraphDemo.ipynb
@@ -6,7 +6,13 @@
"id": "82f90261",
"metadata": {},
"source": [
- "# Knowledge Graph Index"
+ "# Knowledge Graph Index\n",
+ "\n",
+ "This tutorial gives a basic overview of how to use our `KnowledgeGraphIndex`, which handles\n",
+ "automated knowledge graph construction from unstructured text as well as entity-based querying.\n",
+ "\n",
+ "If you would like to query knowledge graphs in more flexible ways, including pre-existing ones, please\n",
+ "check out our `KnowledgeGraphQueryEngine` and other constructs."
]
},
{
| https://api.github.com/repos/run-llama/llama_index/pulls/7411 | 2023-08-26T08:11:21Z | 2023-08-26T08:25:58Z | 2023-08-26T08:25:58Z | 2023-08-26T08:25:59Z | 492 | run-llama/llama_index | 6,582 |
|
Allow atomic transformation (sequence of wrapping) for vectorized environment | diff --git a/gym/vector/__init__.py b/gym/vector/__init__.py
index 89e35bf4473..764ea1978e7 100644
--- a/gym/vector/__init__.py
+++ b/gym/vector/__init__.py
@@ -1,10 +1,15 @@
+try:
+ from collections.abc import Iterable
+except ImportError:
+ Iterable = (tuple, list)
+
from gym.vector.async_vector_env import AsyncVectorEnv
from gym.vector.sync_vector_env import SyncVectorEnv
from gym.vector.vector_env import VectorEnv
__all__ = ['AsyncVectorEnv', 'SyncVectorEnv', 'VectorEnv', 'make']
-def make(id, num_envs=1, asynchronous=True, **kwargs):
+def make(id, num_envs=1, asynchronous=True, wrappers=None, **kwargs):
"""Create a vectorized environment from multiple copies of an environment,
from its id
@@ -20,6 +25,10 @@ def make(id, num_envs=1, asynchronous=True, **kwargs):
If `True`, wraps the environments in an `AsyncVectorEnv` (which uses
`multiprocessing` to run the environments in parallel). If `False`,
wraps the environments in a `SyncVectorEnv`.
+
+ wrappers : Callable or Iterable of Callables (default: `None`)
+ If not `None`, then apply the wrappers to each internal
+ environment during creation.
Returns
-------
@@ -38,6 +47,15 @@ def make(id, num_envs=1, asynchronous=True, **kwargs):
"""
from gym.envs import make as make_
def _make_env():
- return make_(id, **kwargs)
+ env = make_(id, **kwargs)
+ if wrappers is not None:
+ if callable(wrappers):
+ env = wrappers(env)
+ elif isinstance(wrappers, Iterable) and all([callable(w) for w in wrappers]):
+ for wrapper in wrappers:
+ env = wrapper(env)
+ else:
+ raise NotImplementedError
+ return env
env_fns = [_make_env for _ in range(num_envs)]
return AsyncVectorEnv(env_fns) if asynchronous else SyncVectorEnv(env_fns)
| Fix #1560 | https://api.github.com/repos/openai/gym/pulls/1573 | 2019-06-28T19:30:03Z | 2019-07-23T19:34:40Z | 2019-07-23T19:34:40Z | 2019-07-23T19:34:40Z | 493 | openai/gym | 5,601 |
Add --ignored Flag to Exclude Specific Files and Directories During Ingestion | diff --git a/.gitignore b/.gitignore
index bc7d21253..847a30db3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,4 +1,6 @@
.venv
+.env
+venv
settings-me.yaml
diff --git a/Makefile b/Makefile
index a2e2d8d3c..67b76e40a 100644
--- a/Makefile
+++ b/Makefile
@@ -56,3 +56,20 @@ wipe:
setup:
poetry run python scripts/setup
+
+list:
+ @echo "Available commands:"
+ @echo " test : Run tests using pytest"
+ @echo " test-coverage : Run tests with coverage report"
+ @echo " black : Check code format with black"
+ @echo " ruff : Check code with ruff"
+ @echo " format : Format code with black and ruff"
+ @echo " mypy : Run mypy for type checking"
+ @echo " check : Run format and mypy commands"
+ @echo " run : Run the application"
+ @echo " dev-windows : Run the application in development mode on Windows"
+ @echo " dev : Run the application in development mode"
+ @echo " api-docs : Generate API documentation"
+ @echo " ingest : Ingest data using specified script"
+ @echo " wipe : Wipe data using specified script"
+ @echo " setup : Setup the application"
diff --git a/scripts/ingest_folder.py b/scripts/ingest_folder.py
index fc1740a27..8c6acad1c 100755
--- a/scripts/ingest_folder.py
+++ b/scripts/ingest_folder.py
@@ -20,20 +20,20 @@ def __init__(self, ingest_service: IngestService) -> None:
self._files_under_root_folder: list[Path] = list()
- def _find_all_files_in_folder(self, root_path: Path) -> None:
+ def _find_all_files_in_folder(self, root_path: Path, ignored: list[str]) -> None:
"""Search all files under the root folder recursively.
Count them at the same time
"""
for file_path in root_path.iterdir():
- if file_path.is_file():
+ if file_path.is_file() and file_path.name not in ignored:
self.total_documents += 1
self._files_under_root_folder.append(file_path)
- elif file_path.is_dir():
- self._find_all_files_in_folder(file_path)
+ elif file_path.is_dir() and file_path.name not in ignored:
+ self._find_all_files_in_folder(file_path, ignored)
- def ingest_folder(self, folder_path: Path) -> None:
+ def ingest_folder(self, folder_path: Path, ignored: list[str]) -> None:
# Count total documents before ingestion
- self._find_all_files_in_folder(folder_path)
+ self._find_all_files_in_folder(folder_path, ignored)
self._ingest_all(self._files_under_root_folder)
def _ingest_all(self, files_to_ingest: list[Path]) -> None:
@@ -64,12 +64,19 @@ def _do_ingest_one(self, changed_path: Path) -> None:
action=argparse.BooleanOptionalAction,
default=False,
)
+parser.add_argument(
+ "--ignored",
+ nargs="*",
+ help="List of files/directories to ignore",
+ default=[],
+)
parser.add_argument(
"--log-file",
help="Optional path to a log file. If provided, logs will be written to this file.",
type=str,
default=None,
)
+
args = parser.parse_args()
# Set up logging to a file if a path is provided
@@ -91,9 +98,17 @@ def _do_ingest_one(self, changed_path: Path) -> None:
ingest_service = global_injector.get(IngestService)
worker = LocalIngestWorker(ingest_service)
- worker.ingest_folder(root_path)
+ worker.ingest_folder(root_path, args.ignored)
+
+ if args.ignored:
+ logger.info(f"Skipping following files and directories: {args.ignored}")
if args.watch:
logger.info(f"Watching {args.folder} for changes, press Ctrl+C to stop...")
+ directories_to_watch = [
+ dir
+ for dir in root_path.iterdir()
+ if dir.is_dir() and dir.name not in args.ignored
+ ]
watcher = IngestWatcher(args.folder, worker.ingest_on_watch)
watcher.start()
| This pull request introduces the `--ignored` flag to the ingestion script. The motivation for this change stems from encountering `UnicodeDecodeError` when the script processes Python-generated `__pycache__` folders and MacOS-specific `.DS_Store` files. These files are not relevant to our data processing and can cause errors due to their format.
The `--ignored` flag allows users to specify a list of files or directories to exclude from the ingestion process, enhancing the script's flexibility and reliability, especially in Python and Mac environments. | https://api.github.com/repos/zylon-ai/private-gpt/pulls/1432 | 2023-12-20T16:16:26Z | 2024-02-07T18:59:32Z | 2024-02-07T18:59:32Z | 2024-02-07T18:59:32Z | 1,056 | zylon-ai/private-gpt | 38,579 |
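The traversal that the `--ignored` flag hooks into can be sketched independently of the `IngestService`. This version builds a throwaway tree to show that `__pycache__` and `.DS_Store` are skipped by name at any depth; the helper name is illustrative, not the project's API:

```python
import tempfile
from pathlib import Path

def find_files(root: Path, ignored: list[str]) -> list[Path]:
    """Recursively collect files under root, skipping any file or directory
    whose name appears in `ignored` (mirrors _find_all_files_in_folder above)."""
    found: list[Path] = []
    for path in sorted(root.iterdir()):
        if path.name in ignored:
            continue  # matched entries are pruned, including whole subtrees
        if path.is_file():
            found.append(path)
        elif path.is_dir():
            found.extend(find_files(path, ignored))
    return found

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "doc.txt").write_text("hello")
    (root / ".DS_Store").write_text("")
    (root / "__pycache__").mkdir()
    (root / "__pycache__" / "mod.pyc").write_text("")
    names = [p.name for p in find_files(root, ignored=[".DS_Store", "__pycache__"])]
    print(names)  # ['doc.txt']
```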
Feature/support chat glm for conditional generation | diff --git a/colossalai/shardformer/policies/chatglm.py b/colossalai/shardformer/policies/chatglm.py
index 934b99b83ea1..46aa3b52af8f 100644
--- a/colossalai/shardformer/policies/chatglm.py
+++ b/colossalai/shardformer/policies/chatglm.py
@@ -90,7 +90,31 @@ def module_policy(self) -> Dict[Union[str, nn.Module], ModulePolicyDescription]:
policy=policy,
target_key=ChatGLMModel)
+ else:
+ self.append_or_create_submodule_replacement(description=[
+ SubModuleReplacementDescription(suffix="input_layernorm", target_module=col_nn.FusedRMSNorm),
+ SubModuleReplacementDescription(suffix="post_attention_layernorm",
+ target_module=col_nn.FusedRMSNorm)
+ ],
+ policy=policy,
+ target_key=GLMBlock)
+
+ if self.model.config.post_layer_norm:
+ self.append_or_create_submodule_replacement(description=[
+ SubModuleReplacementDescription(suffix="encoder.final_layernorm",
+ target_module=col_nn.FusedRMSNorm)
+ ],
+ policy=policy,
+ target_key=ChatGLMModel)
+
return policy
def postprocess(self):
return self.model
+
+
+class ChatGLMForConditionalGenerationPolicy(ChatGLMModelPolicy):
+
+ def module_policy(self):
+ policy = super().module_policy()
+ return policy
diff --git a/colossalai/shardformer/policies/vit.py b/colossalai/shardformer/policies/vit.py
index 7b035afae22c..d45055bc8beb 100644
--- a/colossalai/shardformer/policies/vit.py
+++ b/colossalai/shardformer/policies/vit.py
@@ -2,7 +2,13 @@
import torch.nn as nn
-from colossalai.shardformer.layer import DropoutForReplicatedInput, DropoutForParallelInput, FusedLayerNorm, Linear1D_Col, Linear1D_Row
+from colossalai.shardformer.layer import (
+ DropoutForParallelInput,
+ DropoutForReplicatedInput,
+ FusedLayerNorm,
+ Linear1D_Col,
+ Linear1D_Row,
+)
from .basepolicy import ModulePolicyDescription, Policy, SubModuleReplacementDescription
@@ -18,101 +24,112 @@ def preprocess(self):
return self.model
def module_policy(self) -> Dict[Union[str, nn.Module], ModulePolicyDescription]:
- from transformers.models.vit.modeling_vit import ViTEmbeddings, ViTLayer
+ from transformers.models.vit.modeling_vit import ViTEmbeddings, ViTLayer, ViTModel
policy = {}
if self.shard_config.enable_tensor_parallelism:
policy[ViTEmbeddings] = ModulePolicyDescription(attribute_replacement={},
- param_replacement=[],
- sub_module_replacement=[
- SubModuleReplacementDescription(
- suffix="dropout",
- target_module=DropoutForReplicatedInput,
- )
- ])
-
- policy[ViTLayer] = ModulePolicyDescription(
- attribute_replacement={
- "attention.attention.num_attention_heads":
- self.model.config.num_attention_heads//self.shard_config.tensor_parallel_size,
- "attention.attention.all_head_size":
- self.model.config.hidden_size//self.shard_config.tensor_parallel_size,
- },
- param_replacement=[],
- sub_module_replacement=[
- SubModuleReplacementDescription(
- suffix="attention.attention.query",
- target_module=Linear1D_Col,
- ),
- SubModuleReplacementDescription(
- suffix="attention.attention.key",
- target_module=Linear1D_Col,
- ),
- SubModuleReplacementDescription(
- suffix="attention.attention.value",
- target_module=Linear1D_Col,
- ),
- SubModuleReplacementDescription(
- suffix="attention.attention.dropout",
- target_module=DropoutForParallelInput,
- ),
- SubModuleReplacementDescription(
- suffix="attention.output.dense",
- target_module=Linear1D_Row,
- ),
- SubModuleReplacementDescription(
- suffix="attention.output.dropout",
- target_module=DropoutForReplicatedInput,
- ),
- SubModuleReplacementDescription(
- suffix="intermediate.dense",
- target_module=Linear1D_Col,
- ),
- SubModuleReplacementDescription(
- suffix="output.dense",
- target_module=Linear1D_Row,
- ),
- SubModuleReplacementDescription(
- suffix="output.dropout",
- target_module=DropoutForReplicatedInput,
- ),
- ]
- )
+ param_replacement=[],
+ sub_module_replacement=[
+ SubModuleReplacementDescription(
+ suffix="dropout",
+ target_module=DropoutForReplicatedInput,
+ )
+ ])
+
+ policy[ViTLayer] = ModulePolicyDescription(attribute_replacement={
+ "attention.attention.num_attention_heads":
+ self.model.config.num_attention_heads // self.shard_config.tensor_parallel_size,
+ "attention.attention.all_head_size":
+ self.model.config.hidden_size // self.shard_config.tensor_parallel_size,
+ },
+ param_replacement=[],
+ sub_module_replacement=[
+ SubModuleReplacementDescription(
+ suffix="attention.attention.query",
+ target_module=Linear1D_Col,
+ ),
+ SubModuleReplacementDescription(
+ suffix="attention.attention.key",
+ target_module=Linear1D_Col,
+ ),
+ SubModuleReplacementDescription(
+ suffix="attention.attention.value",
+ target_module=Linear1D_Col,
+ ),
+ SubModuleReplacementDescription(
+ suffix="attention.attention.dropout",
+ target_module=DropoutForParallelInput,
+ ),
+ SubModuleReplacementDescription(
+ suffix="attention.output.dense",
+ target_module=Linear1D_Row,
+ ),
+ SubModuleReplacementDescription(
+ suffix="attention.output.dropout",
+ target_module=DropoutForReplicatedInput,
+ ),
+ SubModuleReplacementDescription(
+ suffix="intermediate.dense",
+ target_module=Linear1D_Col,
+ ),
+ SubModuleReplacementDescription(
+ suffix="output.dense",
+ target_module=Linear1D_Row,
+ ),
+ SubModuleReplacementDescription(
+ suffix="output.dropout",
+ target_module=DropoutForReplicatedInput,
+ ),
+ ])
+
+ if self.shard_config.enable_fused_normalization:
+ policy[ViTModel] = ModulePolicyDescription(attribute_replacement={},
+ param_replacement=[],
+ sub_module_replacement=[
+ SubModuleReplacementDescription(
+ suffix="layernorm",
+ target_module=FusedLayerNorm,
+ )
+ ])
+
+ self.append_or_create_submodule_replacement(description=[
+ SubModuleReplacementDescription(suffix="layernorm_before", target_module=FusedLayerNorm),
+ SubModuleReplacementDescription(suffix="layernorm_after", target_module=FusedLayerNorm)
+ ],
+ policy=policy,
+ target_key=ViTLayer)
return policy
-
-
+
def new_model_class(self):
return None
def postprocess(self):
return self.model
+
class ViTForImageClassificationPolicy(ViTPolicy):
- def module_policy(self):
+ def module_policy(self):
from transformers.models.vit.modeling_vit import ViTForImageClassification
policy = super().module_policy()
if self.shard_config.enable_tensor_parallelism:
new_item = {
ViTForImageClassification:
- ModulePolicyDescription(sub_module_replacement=[
- SubModuleReplacementDescription(suffix="classifier",
- target_module=Linear1D_Col,
- kwargs=dict(gather_output=True))
- ])
+ ModulePolicyDescription(sub_module_replacement=[
+ SubModuleReplacementDescription(
+ suffix="classifier", target_module=Linear1D_Col, kwargs=dict(gather_output=True))
+ ])
}
policy.update(new_item)
return policy
+
class ViTForMaskedImageModelingPolicy(ViTPolicy):
-
+
def module_policy(self):
policy = super().module_policy()
return policy
-
-
-
-
diff --git a/tests/kit/model_zoo/transformers/chatglm.py b/tests/kit/model_zoo/transformers/chatglm.py
index 1408babede64..04e73a832abe 100644
--- a/tests/kit/model_zoo/transformers/chatglm.py
+++ b/tests/kit/model_zoo/transformers/chatglm.py
@@ -3,7 +3,7 @@
from ..registry import ModelAttribute, model_zoo
from .chatglm2_6b.configuration_chatglm import ChatGLMConfig
-from .chatglm2_6b.modeling_chatglm import ChatGLMModel
+from .chatglm2_6b.modeling_chatglm import ChatGLMForConditionalGeneration, ChatGLMModel
# ================================
# Register single-sentence ChatGLM
@@ -21,7 +21,7 @@ def data_gen():
# define loss function
loss_fn_for_chatglm_model = lambda x: x.last_hidden_state.mean()
-loss_fn = lambda x: x.loss
+loss_fn = lambda x: x.logits.mean()
config = ChatGLMConfig(num_layers=1,
padded_vocab_size=65024,
hidden_size=64,
@@ -36,3 +36,10 @@ def data_gen():
output_transform_fn=output_transform_fn,
loss_fn=loss_fn_for_chatglm_model,
model_attribute=ModelAttribute(has_control_flow=True))
+
+model_zoo.register(name="transformers_chatglm_for_conditional_generation",
+ model_fn=lambda: ChatGLMForConditionalGeneration(config, empty_init=False),
+ data_gen_fn=data_gen,
+ output_transform_fn=output_transform_fn,
+ loss_fn=loss_fn,
+ model_attribute=ModelAttribute(has_control_flow=True))
diff --git a/tests/test_shardformer/test_model/test_shard_chatglm.py b/tests/test_shardformer/test_model/test_shard_chatglm.py
index 2cdf5da2e6da..a0fa4bd82e74 100644
--- a/tests/test_shardformer/test_model/test_shard_chatglm.py
+++ b/tests/test_shardformer/test_model/test_shard_chatglm.py
@@ -7,7 +7,7 @@
import colossalai
from colossalai.logging import disable_existing_loggers
from colossalai.shardformer import ShardConfig, ShardFormer
-from colossalai.shardformer.policies.chatglm import ChatGLMModelPolicy
+from colossalai.shardformer.policies.chatglm import ChatGLMForConditionalGenerationPolicy, ChatGLMModelPolicy
from colossalai.tensor.d_tensor.api import is_customized_distributed_tensor, is_distributed_tensor
from colossalai.testing import (
assert_hf_output_close,
@@ -85,6 +85,8 @@ def run_chatglm_test(enable_fused_normalization, enable_tensor_parallelism):
shard_former = ShardFormer(shard_config=shard_config)
if name == "transformers_chatglm":
sharded_model = shard_former.optimize(model_copy, ChatGLMModelPolicy()).cuda()
+ else:
+ sharded_model = shard_former.optimize(model_copy, ChatGLMForConditionalGenerationPolicy()).cuda()
check_forward_backward(org_model, sharded_model, data_gen_fn, output_transform_fn, loss_fn)
torch.cuda.empty_cache()
| ## 📌 Checklist before creating the PR
- [ ] I have created an issue for this PR for traceability
- [ ] The title follows the standard format: `[doc/gemini/tensor/...]: A concise description`
- [ ] I have added relevant tags if possible for us to better distinguish different PRs
## 🚨 Issue number
> Link this PR to your issue with words like fixed to automatically close the linked issue upon merge
>
> e.g. `fixed #1234`, `closed #1234`, `resolved #1234`
## 📝 What does this PR do?
> Summarize your work here.
> if you have any plots/diagrams/screenshots/tables, please attach them here.
## 💥 Checklist before requesting a review
- [ ] I have linked my PR to an issue ([instruction](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue))
- [ ] My issue clearly describes the problem/feature/proposal, with diagrams/charts/table/code if possible
- [ ] I have performed a self-review of my code
- [ ] I have added thorough tests.
- [ ] I have added docstrings for all the functions/methods I implemented
## ⭐️ Do you enjoy contributing to Colossal-AI?
- [x] 🌝 Yes, I do.
- [ ] 🌚 No, I don't.
Tell us more if you don't enjoy contributing to Colossal-AI.
| https://api.github.com/repos/hpcaitech/ColossalAI/pulls/4297 | 2023-07-20T11:50:20Z | 2023-07-21T09:20:01Z | 2023-07-21T09:20:01Z | 2023-07-21T09:20:01Z | 2,545 | hpcaitech/ColossalAI | 11,154 |
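The `attribute_replacement` entries in the diff above all follow one pattern: per-layer attention sizes are divided by the tensor-parallel world size. A standalone sketch of that arithmetic (the function name and the divisibility check are illustrative, not ColossalAI API):

```python
def shard_attention_attrs(num_attention_heads: int, hidden_size: int,
                          tp_size: int) -> dict:
    """Per-rank attention attributes under tensor parallelism, as computed
    in the ViT/ChatGLM policies above."""
    if num_attention_heads % tp_size != 0:
        raise ValueError("attention heads must divide evenly across ranks")
    return {
        "num_attention_heads": num_attention_heads // tp_size,
        "all_head_size": hidden_size // tp_size,
    }

# e.g. 32 heads and hidden size 4096 sharded over 4 GPUs:
print(shard_attention_attrs(32, 4096, 4))
# {'num_attention_heads': 8, 'all_head_size': 1024}
```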
Update colab readme | diff --git a/readme.md b/readme.md
index 8a91c3208..feb8e3626 100644
--- a/readme.md
+++ b/readme.md
@@ -121,7 +121,7 @@ See also the common problems and troubleshoots [here](troubleshoot.md).
In Colab, you can modify the last line to `!python entry_with_update.py --share` or `!python entry_with_update.py --preset anime --share` or `!python entry_with_update.py --preset realistic --share` for Fooocus Default/Anime/Realistic Edition.
-Note that this Colab will disable refiner by default because Colab free's resource is relatively limited.
+Note that this Colab will disable refiner by default because Colab free's resource is relatively limited (and some "big" features like image prompt may cause free-tier Colab to disconnect). We make sure that basic text-to-image is always working on free-tier Colab.
Thanks to [camenduru](https://github.com/camenduru)!
| https://api.github.com/repos/lllyasviel/Fooocus/pulls/1416 | 2023-12-14T22:50:22Z | 2023-12-14T22:50:35Z | 2023-12-14T22:50:35Z | 2023-12-14T22:50:37Z | 230 | lllyasviel/Fooocus | 7,265 |
|
chore: enhance .gitignore | diff --git a/.gitignore b/.gitignore
index 0dbf39f9f..58c0144c7 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,32 +1,241 @@
-# Ignore the .replit configuration file
-.replit
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
-# Ignore the .env file if you're using one to store sensitive information
-.env
+# C extensions
+*.so
-# Ignore Python virtual environment directories
-venv/
-__pycache__/
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
-# Ignore Nix directories
-nix/
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
-# Ignore the replit.nix configuration file
-replit.nix
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
-# Ignore any logs
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/#use-with-ide
+.pdm.toml
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
-# Ignore all .DS_Store files
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+# General
.DS_Store
+.AppleDouble
+.LSOverride
-# Ignore dist directory
-dist/
+# Icon must end with two \r
+Icon
+
+
+# Thumbnails
+._*
+
+# Files that might appear in the root of a volume
+.DocumentRevisions-V100
+.fseventsd
+.Spotlight-V100
+.TemporaryItems
+.Trashes
+.VolumeIcon.icns
+.com.apple.timemachine.donotpresent
+
+# Directories potentially created on remote AFP share
+.AppleDB
+.AppleDesktop
+Network Trash Folder
+Temporary Items
+.apdisk
+
+# Windows thumbnail cache files
+Thumbs.db
+Thumbs.db:encryptable
+ehthumbs.db
+ehthumbs_vista.db
+
+# Dump file
+*.stackdump
+
+# Folder config file
+[Dd]esktop.ini
+
+# Recycle Bin used on file shares
+$RECYCLE.BIN/
+
+# Windows Installer files
+*.cab
+*.msi
+*.msix
+*.msm
+*.msp
+
+# Windows shortcuts
+*.lnk
+
+.vscode/*
+!.vscode/settings.json
+!.vscode/tasks.json
+!.vscode/launch.json
+!.vscode/extensions.json
+!.vscode/*.code-snippets
+
+# Local History for Visual Studio Code
+.history/
+
+# Built Visual Studio Code Extensions
+*.vsix
+
+# Ignore the .replit configuration file
+.replit
+
+# Ignore Nix directories
+nix/
+
+# Ignore the replit.nix configuration file
+replit.nix
# Ignore misc directory
misc/
-.vscode/
-
# Ignore litellm_uuid.txt
-litellm_uuid.txt
\ No newline at end of file
+litellm_uuid.txt
| ### Describe the changes you have made:
Enhanced the `.gitignore` file to catch more things that shouldn't be committed.
I mainly did this because `poetry config virtualenvs.in-project true` needs to be set for vscode debugging, which puts a `.venv` folder into the project root. Since `.venv` wasn't in `.gitignore`, I decided to use the [gitignore](https://marketplace.visualstudio.com/items?itemName=codezombiech.gitignore) vscode extension to add the recommended entries for Python, macOS, and Windows, and merge them with the ones that were already there.
- [x] I have performed a self-review of my code:
### I have tested the code on the following OS:
- [x] Windows
- [ ] MacOS
- [ ] Linux
| https://api.github.com/repos/OpenInterpreter/open-interpreter/pulls/374 | 2023-09-15T04:47:14Z | 2023-09-15T04:47:22Z | 2023-09-15T04:47:22Z | 2023-09-15T04:47:23Z | 1,451 | OpenInterpreter/open-interpreter | 40,847 |
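The enlarged ignore list above relies on two gitignore behaviours worth calling out: later patterns win, and a `!` prefix re-includes a previously ignored path, as in the `.vscode/*` / `!.vscode/settings.json` pair. A toy last-match-wins evaluator (deliberately much simpler than real gitignore semantics) makes the interaction concrete:

```python
import fnmatch

def is_ignored(path: str, patterns: list[str]) -> bool:
    """Toy evaluation of ignore patterns with `!` negation; the last
    matching pattern decides. Real gitignore rules are richer than this."""
    ignored = False
    for pattern in patterns:
        negated = pattern.startswith("!")
        if negated:
            pattern = pattern[1:]
        if fnmatch.fnmatch(path, pattern):
            ignored = not negated
    return ignored

patterns = [".vscode/*", "!.vscode/settings.json"]
print(is_ignored(".vscode/launch.json", patterns))    # True
print(is_ignored(".vscode/settings.json", patterns))  # False
```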