Dataset metadata:

- Modalities: Text
- Formats: json
- Size: n<1K
Schema:

| Column | Type | Value stats |
| --- | --- | --- |
| testbed_environment | stringclasses | 2 values |
| modified_files | listlengths | 0–6 |
| repo_url | stringclasses | 14 values |
| base_commit | stringlengths | 7–7 |
| requirements_txt | stringclasses | 14 values |
| solution_patch | stringlengths | 238–19.4k |
| id | stringlengths | 3–5 |
| solution_commit | stringlengths | 7–9 |
| language | stringclasses | 2 values |
| test_script | stringlengths | 553–11.4k |
| instruction | stringlengths | 17–1.08k |
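Each row is a JSON record with the columns above. As a quick way to work with the rows programmatically, here is a minimal sketch using the Hugging Face `datasets` library; the local file name `data.jsonl` is a hypothetical placeholder for wherever this dataset's JSON file lives.

```python
# Minimal sketch: load the JSON rows with the `datasets` library.
# "data.jsonl" is a hypothetical local path, not this dataset's real file name.
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train")

row = ds[0]
print(row["id"], row["repo_url"], row["base_commit"])
print(row["instruction"])
```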
testbed_environment: python3.9
modified_files:
[ { "content": "import hashlib\nimport netrc\nimport os\nimport re\nimport time\nimport typing\nfrom base64 import b64encode\nfrom urllib.request import parse_http_list\n\nfrom ._exceptions import ProtocolError\nfrom ._models import Request, Response\nfrom ._utils import to_bytes, to_str, unquote\n\nif typing.TYPE_CHECKING: # pragma: no cover\n from hashlib import _Hash\n\n\nclass Auth:\n \"\"\"\n Base class for all authentication schemes.\n\n To implement a custom authentication scheme, subclass `Auth` and override\n the `.auth_flow()` method.\n\n If the authentication scheme does I/O such as disk access or network calls, or uses\n synchronization primitives such as locks, you should override `.sync_auth_flow()`\n and/or `.async_auth_flow()` instead of `.auth_flow()` to provide specialized\n implementations that will be used by `Client` and `AsyncClient` respectively.\n \"\"\"\n\n requires_request_body = False\n requires_response_body = False\n\n def auth_flow(self, request: Request) -> typing.Generator[Request, Response, None]:\n \"\"\"\n Execute the authentication flow.\n\n To dispatch a request, `yield` it:\n\n ```\n yield request\n ```\n\n The client will `.send()` the response back into the flow generator. You can\n access it like so:\n\n ```\n response = yield request\n ```\n\n A `return` (or reaching the end of the generator) will result in the\n client returning the last response obtained from the server.\n\n You can dispatch as many requests as is necessary.\n \"\"\"\n yield request\n\n def sync_auth_flow(\n self, request: Request\n ) -> typing.Generator[Request, Response, None]:\n \"\"\"\n Execute the authentication flow synchronously.\n\n By default, this defers to `.auth_flow()`. You should override this method\n when the authentication scheme does I/O and/or uses concurrency primitives.\n \"\"\"\n if self.requires_request_body:\n request.read()\n\n flow = self.auth_flow(request)\n request = next(flow)\n\n while True:\n response = yield request\n if self.requires_response_body:\n response.read()\n\n try:\n request = flow.send(response)\n except StopIteration:\n break\n\n async def async_auth_flow(\n self, request: Request\n ) -> typing.AsyncGenerator[Request, Response]:\n \"\"\"\n Execute the authentication flow asynchronously.\n\n By default, this defers to `.auth_flow()`. 
You should override this method\n when the authentication scheme does I/O and/or uses concurrency primitives.\n \"\"\"\n if self.requires_request_body:\n await request.aread()\n\n flow = self.auth_flow(request)\n request = next(flow)\n\n while True:\n response = yield request\n if self.requires_response_body:\n await response.aread()\n\n try:\n request = flow.send(response)\n except StopIteration:\n break\n\n\nclass FunctionAuth(Auth):\n \"\"\"\n Allows the 'auth' argument to be passed as a simple callable function,\n that takes the request, and returns a new, modified request.\n \"\"\"\n\n def __init__(self, func: typing.Callable[[Request], Request]) -> None:\n self._func = func\n\n def auth_flow(self, request: Request) -> typing.Generator[Request, Response, None]:\n yield self._func(request)\n\n\nclass BasicAuth(Auth):\n \"\"\"\n Allows the 'auth' argument to be passed as a (username, password) pair,\n and uses HTTP Basic authentication.\n \"\"\"\n\n def __init__(\n self, username: typing.Union[str, bytes], password: typing.Union[str, bytes]\n ):\n self._auth_header = self._build_auth_header(username, password)\n\n def auth_flow(self, request: Request) -> typing.Generator[Request, Response, None]:\n request.headers[\"Authorization\"] = self._auth_header\n yield request\n\n def _build_auth_header(\n self, username: typing.Union[str, bytes], password: typing.Union[str, bytes]\n ) -> str:\n userpass = b\":\".join((to_bytes(username), to_bytes(password)))\n token = b64encode(userpass).decode()\n return f\"Basic {token}\"\n\n\nclass NetRCAuth(Auth):\n \"\"\"\n Use a 'netrc' file to lookup basic auth credentials based on the url host.\n \"\"\"\n\n def __init__(self, file: typing.Optional[str]):\n self._netrc_info = netrc.netrc(file)\n\n def auth_flow(self, request: Request) -> typing.Generator[Request, Response, None]:\n auth_info = self._netrc_info.authenticators(request.url.host)\n if auth_info is None or not auth_info[2]:\n # The netrc file did not have authentication credentials for this host.\n yield request\n else:\n # Build a basic auth header with credentials from the netrc file.\n request.headers[\"Authorization\"] = self._build_auth_header(\n username=auth_info[0], password=auth_info[2]\n )\n yield request\n\n def _build_auth_header(\n self, username: typing.Union[str, bytes], password: typing.Union[str, bytes]\n ) -> str:\n userpass = b\":\".join((to_bytes(username), to_bytes(password)))\n token = b64encode(userpass).decode()\n return f\"Basic {token}\"\n\n\nclass DigestAuth(Auth):\n _ALGORITHM_TO_HASH_FUNCTION: typing.Dict[str, typing.Callable[[bytes], \"_Hash\"]] = {\n \"MD5\": hashlib.md5,\n \"MD5-SESS\": hashlib.md5,\n \"SHA\": hashlib.sha1,\n \"SHA-SESS\": hashlib.sha1,\n \"SHA-256\": hashlib.sha256,\n \"SHA-256-SESS\": hashlib.sha256,\n \"SHA-512\": hashlib.sha512,\n \"SHA-512-SESS\": hashlib.sha512,\n }\n\n def __init__(\n self, username: typing.Union[str, bytes], password: typing.Union[str, bytes]\n ) -> None:\n self._username = to_bytes(username)\n self._password = to_bytes(password)\n self._last_challenge: typing.Optional[_DigestAuthChallenge] = None\n self._nonce_count = 1\n\n def auth_flow(self, request: Request) -> typing.Generator[Request, Response, None]:\n if self._last_challenge:\n request.headers[\"Authorization\"] = self._build_auth_header(\n request, self._last_challenge\n )\n\n response = yield request\n\n if response.status_code != 401 or \"www-authenticate\" not in response.headers:\n # If the response is not a 401 then we don't\n # need to build an 
authenticated request.\n return\n\n for auth_header in response.headers.get_list(\"www-authenticate\"):\n if auth_header.lower().startswith(\"digest \"):\n break\n else:\n # If the response does not include a 'WWW-Authenticate: Digest ...'\n # header, then we don't need to build an authenticated request.\n return\n\n self._last_challenge = self._parse_challenge(request, response, auth_header)\n self._nonce_count = 1\n\n request.headers[\"Authorization\"] = self._build_auth_header(\n request, self._last_challenge\n )\n yield request\n\n def _parse_challenge(\n self, request: Request, response: Response, auth_header: str\n ) -> \"_DigestAuthChallenge\":\n \"\"\"\n Returns a challenge from a Digest WWW-Authenticate header.\n These take the form of:\n `Digest realm=\"realm@host.com\",qop=\"auth,auth-int\",nonce=\"abc\",opaque=\"xyz\"`\n \"\"\"\n scheme, _, fields = auth_header.partition(\" \")\n\n # This method should only ever have been called with a Digest auth header.\n assert scheme.lower() == \"digest\"\n\n header_dict: typing.Dict[str, str] = {}\n for field in parse_http_list(fields):\n key, value = field.strip().split(\"=\", 1)\n header_dict[key] = unquote(value)\n\n try:\n realm = header_dict[\"realm\"].encode()\n nonce = header_dict[\"nonce\"].encode()\n algorithm = header_dict.get(\"algorithm\", \"MD5\")\n opaque = header_dict[\"opaque\"].encode() if \"opaque\" in header_dict else None\n qop = header_dict[\"qop\"].encode() if \"qop\" in header_dict else None\n return _DigestAuthChallenge(\n realm=realm, nonce=nonce, algorithm=algorithm, opaque=opaque, qop=qop\n )\n except KeyError as exc:\n message = \"Malformed Digest WWW-Authenticate header\"\n raise ProtocolError(message, request=request) from exc\n\n def _build_auth_header(\n self, request: Request, challenge: \"_DigestAuthChallenge\"\n ) -> str:\n hash_func = self._ALGORITHM_TO_HASH_FUNCTION[challenge.algorithm.upper()]\n\n def digest(data: bytes) -> bytes:\n return hash_func(data).hexdigest().encode()\n\n A1 = b\":\".join((self._username, challenge.realm, self._password))\n\n path = request.url.raw_path\n A2 = b\":\".join((request.method.encode(), path))\n # TODO: implement auth-int\n HA2 = digest(A2)\n\n nc_value = b\"%08x\" % self._nonce_count\n cnonce = self._get_client_nonce(self._nonce_count, challenge.nonce)\n self._nonce_count += 1\n\n HA1 = digest(A1)\n if challenge.algorithm.lower().endswith(\"-sess\"):\n HA1 = digest(b\":\".join((HA1, challenge.nonce, cnonce)))\n\n qop = self._resolve_qop(challenge.qop, request=request)\n if qop is None:\n digest_data = [HA1, challenge.nonce, HA2]\n else:\n digest_data = [challenge.nonce, nc_value, cnonce, qop, HA2]\n key_digest = b\":\".join(digest_data)\n\n format_args = {\n \"username\": self._username,\n \"realm\": challenge.realm,\n \"nonce\": challenge.nonce,\n \"uri\": path,\n \"response\": digest(b\":\".join((HA1, key_digest))),\n \"algorithm\": challenge.algorithm.encode(),\n }\n if challenge.opaque:\n format_args[\"opaque\"] = challenge.opaque\n if qop:\n format_args[\"qop\"] = b\"auth\"\n format_args[\"nc\"] = nc_value\n format_args[\"cnonce\"] = cnonce\n\n return \"Digest \" + self._get_header_value(format_args)\n\n def _get_client_nonce(self, nonce_count: int, nonce: bytes) -> bytes:\n s = str(nonce_count).encode()\n s += nonce\n s += time.ctime().encode()\n s += os.urandom(8)\n\n return hashlib.sha1(s).hexdigest()[:16].encode()\n\n def _get_header_value(self, header_fields: typing.Dict[str, bytes]) -> str:\n NON_QUOTED_FIELDS = (\"algorithm\", \"qop\", \"nc\")\n 
QUOTED_TEMPLATE = '{}=\"{}\"'\n NON_QUOTED_TEMPLATE = \"{}={}\"\n\n header_value = \"\"\n for i, (field, value) in enumerate(header_fields.items()):\n if i > 0:\n header_value += \", \"\n template = (\n QUOTED_TEMPLATE\n if field not in NON_QUOTED_FIELDS\n else NON_QUOTED_TEMPLATE\n )\n header_value += template.format(field, to_str(value))\n\n return header_value\n\n def _resolve_qop(\n self, qop: typing.Optional[bytes], request: Request\n ) -> typing.Optional[bytes]:\n if qop is None:\n return None\n qops = re.split(b\", ?\", qop)\n if b\"auth\" in qops:\n return b\"auth\"\n\n if qops == [b\"auth-int\"]:\n raise NotImplementedError(\"Digest auth-int support is not yet implemented\")\n\n message = f'Unexpected qop value \"{qop!r}\" in digest auth'\n raise ProtocolError(message, request=request)\n\n\nclass _DigestAuthChallenge(typing.NamedTuple):\n realm: bytes\n nonce: bytes\n algorithm: str\n opaque: typing.Optional[bytes]\n qop: typing.Optional[bytes]\n", "path": "httpx/_auth.py" } ]
repo_url: https://github.com/teamqurrent/httpx
base_commit: 4b5a92e
requirements_txt: sniffio rfc3986 httpcore>=0.18.0,<0.19.0 certifi idna
solution_patch:

diff --git a/httpx/_auth.py b/httpx/_auth.py
--- a/httpx/_auth.py
+++ b/httpx/_auth.py
@@ -147,7 +147,7 @@ class NetRCAuth:
     Use a 'netrc' file to lookup basic auth credentials based on the url host.
     """
 
-    def __init__(self, file: typing.Optional[str]):
+    def __init__(self, file: typing.Optional[str] = None):
         self._netrc_info = netrc.netrc(file)
 
     def auth_flow(self, request: Request) -> typing.Generator[Request, Response, None]:
id: 0_0
solution_commit: c1cc6b2
language: python
test_script:

import sys
import unittest
import inspect

class TestNetRCAuthFileParam(unittest.TestCase):
    def test_netrcauth_file_param_default(self):
        from httpx._auth import NetRCAuth
        if hasattr(NetRCAuth, "__init__"):
            init_method = getattr(NetRCAuth, "__init__")
            method_signature = inspect.signature(init_method)
            if "file" in method_signature.parameters:
                param = method_signature.parameters["file"]
                self.assertIs(param.default, None, "Default value for 'file' parameter is not None")
            else:
                self.fail("The 'file' parameter is not present in NetRCAuth.__init__ method.")
        else:
            self.fail("NetRCAuth does not have an __init__ method.")

def main():
    suite = unittest.TestSuite()
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestNetRCAuthFileParam))
    runner = unittest.TextTestRunner()
    if runner.run(suite).wasSuccessful():
        sys.exit(0)
    else:
        sys.exit(1)

if __name__ == "__main__":
    main()
instruction: Add `None` as the default value for the `file` parameter of the `NetRCAuth` class constructor inside `httpx/_auth.py`.
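To illustrate the behavior this row's patch enables (this example is not part of the dataset row): once `file` defaults to `None`, `netrc.netrc(None)` falls back to the standard library's default lookup of `~/.netrc`.

```python
# Sketch of the patched behavior: NetRCAuth() now works with no arguments,
# deferring to the stdlib's default ~/.netrc lookup.
import httpx

auth = httpx.NetRCAuth()  # before the patch, an explicit file argument was required
client = httpx.Client(auth=auth)  # note: raises FileNotFoundError if no netrc file exists
```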
testbed_environment: python3.9
modified_files:
[ { "content": "import binascii\nimport io\nimport os\nimport typing\nfrom pathlib import Path\n\nfrom ._types import (\n AsyncByteStream,\n FileContent,\n FileTypes,\n RequestFiles,\n SyncByteStream,\n)\nfrom ._utils import (\n format_form_param,\n guess_content_type,\n peek_filelike_length,\n primitive_value_to_str,\n to_bytes,\n)\n\n\ndef get_multipart_boundary_from_content_type(\n content_type: typing.Optional[bytes],\n) -> typing.Optional[bytes]:\n if not content_type or not content_type.startswith(b\"multipart/form-data\"):\n return None\n # parse boundary according to\n # https://www.rfc-editor.org/rfc/rfc2046#section-5.1.1\n if b\";\" in content_type:\n for section in content_type.split(b\";\"):\n if section.strip().lower().startswith(b\"boundary=\"):\n return section.strip()[len(b\"boundary=\") :].strip(b'\"')\n return None\n\n\nclass DataField:\n \"\"\"\n A single form field item, within a multipart form field.\n \"\"\"\n\n def __init__(\n self, name: str, value: typing.Union[str, bytes, int, float, None]\n ) -> None:\n if not isinstance(name, str):\n raise TypeError(\n f\"Invalid type for name. Expected str, got {type(name)}: {name!r}\"\n )\n if value is not None and not isinstance(value, (str, bytes, int, float)):\n raise TypeError(\n f\"Invalid type for value. Expected primitive type, got {type(value)}: {value!r}\"\n )\n self.name = name\n self.value: typing.Union[str, bytes] = (\n value if isinstance(value, bytes) else primitive_value_to_str(value)\n )\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n name = format_form_param(\"name\", self.name)\n self._headers = b\"\".join(\n [b\"Content-Disposition: form-data; \", name, b\"\\r\\n\\r\\n\"]\n )\n\n return self._headers\n\n def render_data(self) -> bytes:\n if not hasattr(self, \"_data\"):\n self._data = to_bytes(self.value)\n\n return self._data\n\n def get_length(self) -> int:\n headers = self.render_headers()\n data = self.render_data()\n return len(headers) + len(data)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield self.render_data()\n\n\nclass FileField:\n \"\"\"\n A single file field item, within a multipart form field.\n \"\"\"\n\n CHUNK_SIZE = 64 * 1024\n\n def __init__(self, name: str, value: FileTypes) -> None:\n self.name = name\n\n fileobj: FileContent\n\n headers: typing.Dict[str, str] = {}\n content_type: typing.Optional[str] = None\n\n # This large tuple based API largely mirror's requests' API\n # It would be good to think of better APIs for this that we could include in httpx 2.0\n # since variable length tuples (especially of 4 elements) are quite unwieldly\n if isinstance(value, tuple):\n if len(value) == 2:\n # neither the 3rd parameter (content_type) nor the 4th (headers) was included\n filename, fileobj = value # type: ignore\n elif len(value) == 3:\n filename, fileobj, content_type = value # type: ignore\n else:\n # all 4 parameters included\n filename, fileobj, content_type, headers = value # type: ignore\n else:\n filename = Path(str(getattr(value, \"name\", \"upload\"))).name\n fileobj = value\n\n if content_type is None:\n content_type = guess_content_type(filename)\n\n has_content_type_header = any(\"content-type\" in key.lower() for key in headers)\n if content_type is not None and not has_content_type_header:\n # note that unlike requests, we ignore the content_type\n # provided in the 3rd tuple element if it is also included in the headers\n # requests does the opposite (it overwrites the header with the 3rd tuple element)\n 
headers[\"Content-Type\"] = content_type\n\n if isinstance(fileobj, (str, io.StringIO)):\n raise TypeError(f\"Expected bytes or bytes-like object got: {type(fileobj)}\")\n\n self.filename = filename\n self.file = fileobj\n self.headers = headers\n\n def get_length(self) -> int:\n headers = self.render_headers()\n\n if isinstance(self.file, (str, bytes)):\n return len(headers) + len(to_bytes(self.file))\n\n # Let's do our best not to read `file` into memory.\n file_length = peek_filelike_length(self.file)\n if file_length is None:\n # As a last resort, read file and cache contents for later.\n assert not hasattr(self, \"_data\")\n self._data = to_bytes(self.file.read())\n file_length = len(self._data)\n\n return len(headers) + file_length\n\n def render_headers(self) -> bytes:\n if not hasattr(self, \"_headers\"):\n parts = [\n b\"Content-Disposition: form-data; \",\n format_form_param(\"name\", self.name),\n ]\n if self.filename:\n filename = format_form_param(\"filename\", self.filename)\n parts.extend([b\"; \", filename])\n for header_name, header_value in self.headers.items():\n key, val = f\"\\r\\n{header_name}: \".encode(), header_value.encode()\n parts.extend([key, val])\n parts.append(b\"\\r\\n\\r\\n\")\n self._headers = b\"\".join(parts)\n\n return self._headers\n\n def render_data(self) -> typing.Iterator[bytes]:\n if isinstance(self.file, (str, bytes)):\n yield to_bytes(self.file)\n return\n\n if hasattr(self, \"_data\"):\n # Already rendered.\n yield self._data\n return\n\n if hasattr(self.file, \"seek\"):\n self.file.seek(0)\n\n chunk = self.file.read(self.CHUNK_SIZE)\n while chunk:\n yield to_bytes(chunk)\n chunk = self.file.read(self.CHUNK_SIZE)\n\n def render(self) -> typing.Iterator[bytes]:\n yield self.render_headers()\n yield from self.render_data()\n\n\nclass MultipartStream(SyncByteStream, AsyncByteStream):\n \"\"\"\n Request content as streaming multipart encoded form data.\n \"\"\"\n\n def __init__(\n self, data: dict, files: RequestFiles, boundary: typing.Optional[bytes] = None\n ) -> None:\n if boundary is None:\n boundary = binascii.hexlify(os.urandom(16))\n\n self.boundary = boundary\n self.content_type = \"multipart/form-data; boundary=%s\" % boundary.decode(\n \"ascii\"\n )\n self.fields = list(self._iter_fields(data, files))\n\n def _iter_fields(\n self, data: dict, files: RequestFiles\n ) -> typing.Iterator[typing.Union[FileField, DataField]]:\n for name, value in data.items():\n if isinstance(value, list):\n for item in value:\n yield DataField(name=name, value=item)\n else:\n yield DataField(name=name, value=value)\n\n file_items = files.items() if isinstance(files, typing.Mapping) else files\n for name, value in file_items:\n yield FileField(name=name, value=value)\n\n def iter_chunks(self) -> typing.Iterator[bytes]:\n for field in self.fields:\n yield b\"--%s\\r\\n\" % self.boundary\n yield from field.render()\n yield b\"\\r\\n\"\n yield b\"--%s--\\r\\n\" % self.boundary\n\n def iter_chunks_lengths(self) -> typing.Iterator[int]:\n boundary_length = len(self.boundary)\n # Follow closely what `.iter_chunks()` does.\n for field in self.fields:\n yield 2 + boundary_length + 2\n yield field.get_length()\n yield 2\n yield 2 + boundary_length + 4\n\n def get_content_length(self) -> int:\n return sum(self.iter_chunks_lengths())\n\n # Content stream interface.\n\n def get_headers(self) -> typing.Dict[str, str]:\n content_length = str(self.get_content_length())\n content_type = self.content_type\n return {\"Content-Length\": content_length, \"Content-Type\": 
content_type}\n\n def __iter__(self) -> typing.Iterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n\n async def __aiter__(self) -> typing.AsyncIterator[bytes]:\n for chunk in self.iter_chunks():\n yield chunk\n", "path": "httpx/_multipart.py" } ]
repo_url: https://github.com/teamqurrent/httpx
base_commit: ccd98b1
requirements_txt: sniffio rfc3986 httpcore>=0.18.0,<0.19.0 certifi idna
solution_patch:

diff --git a/httpx/_multipart.py b/httpx/_multipart.py
--- a/httpx/_multipart.py
+++ b/httpx/_multipart.py
@@ -205,7 +205,7 @@ class MultipartStream(SyncByteStream, AsyncByteStream):
         self, data: dict, files: RequestFiles
     ) -> typing.Iterator[typing.Union[FileField, DataField]]:
         for name, value in data.items():
-            if isinstance(value, list):
+            if isinstance(value, (tuple, list)):
                 for item in value:
                     yield DataField(name=name, value=item)
             else:
id: 0_1
solution_commit: 965b8ad
language: python
test_script:

import sys
import unittest
import inspect

class TestMultipartStreamIterFields(unittest.TestCase):
    def test_iter_fields_code(self):
        from httpx._multipart import MultipartStream
        source_lines = inspect.getsourcelines(MultipartStream._iter_fields)
        found_isinstance_tuple = any("isinstance" in line and "tuple" in line for line in source_lines[0])
        self.assertTrue(found_isinstance_tuple, "The line with 'isinstance' and 'tuple' was not found in MultipartStream._iter_fields")

def main():
    suite = unittest.TestSuite()
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestMultipartStreamIterFields))
    runner = unittest.TextTestRunner()
    if runner.run(suite).wasSuccessful():
        sys.exit(0)
    else:
        sys.exit(1)

if __name__ == "__main__":
    main()
instruction: Allow a tuple or a list for multipart data values in the `_iter_fields` method inside `httpx/_multipart.py`.
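A short sketch of the patched behavior (not part of the dataset row): tuple values in `data` now expand into repeated multipart fields, exactly as list values already did; previously a tuple fell through to the single-value branch and raised a TypeError.

```python
# Sketch: tuples and lists both expand into one multipart form field per item.
import httpx

files = {"upload": ("hello.txt", b"hello", "text/plain")}
data = {"tags": ("alpha", "beta")}  # a tuple here used to raise; a list already worked

request = httpx.Request("POST", "https://example.org/", data=data, files=files)
assert request.read().count(b'name="tags"') == 2  # one field per tuple item
```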
testbed_environment: python3.9
modified_files:
[ { "content": "import codecs\nimport email.message\nimport logging\nimport mimetypes\nimport netrc\nimport os\nimport re\nimport sys\nimport time\nimport typing\nfrom pathlib import Path\nfrom urllib.request import getproxies\n\nimport sniffio\n\nfrom ._types import PrimitiveData\n\nif typing.TYPE_CHECKING: # pragma: no cover\n from ._urls import URL\n\n\n_HTML5_FORM_ENCODING_REPLACEMENTS = {'\"': \"%22\", \"\\\\\": \"\\\\\\\\\"}\n_HTML5_FORM_ENCODING_REPLACEMENTS.update(\n {chr(c): \"%{:02X}\".format(c) for c in range(0x1F + 1) if c != 0x1B}\n)\n_HTML5_FORM_ENCODING_RE = re.compile(\n r\"|\".join([re.escape(c) for c in _HTML5_FORM_ENCODING_REPLACEMENTS.keys()])\n)\n\n\ndef normalize_header_key(\n value: typing.Union[str, bytes],\n lower: bool,\n encoding: typing.Optional[str] = None,\n) -> bytes:\n \"\"\"\n Coerce str/bytes into a strictly byte-wise HTTP header key.\n \"\"\"\n if isinstance(value, bytes):\n bytes_value = value\n else:\n bytes_value = value.encode(encoding or \"ascii\")\n\n return bytes_value.lower() if lower else bytes_value\n\n\ndef normalize_header_value(\n value: typing.Union[str, bytes], encoding: typing.Optional[str] = None\n) -> bytes:\n \"\"\"\n Coerce str/bytes into a strictly byte-wise HTTP header value.\n \"\"\"\n if isinstance(value, bytes):\n return value\n return value.encode(encoding or \"ascii\")\n\n\ndef primitive_value_to_str(value: \"PrimitiveData\") -> str:\n \"\"\"\n Coerce a primitive data type into a string value.\n\n Note that we prefer JSON-style 'true'/'false' for boolean values here.\n \"\"\"\n if value is True:\n return \"true\"\n elif value is False:\n return \"false\"\n elif value is None:\n return \"\"\n return str(value)\n\n\ndef is_known_encoding(encoding: str) -> bool:\n \"\"\"\n Return `True` if `encoding` is a known codec.\n \"\"\"\n try:\n codecs.lookup(encoding)\n except LookupError:\n return False\n return True\n\n\ndef format_form_param(name: str, value: str) -> bytes:\n \"\"\"\n Encode a name/value pair within a multipart form.\n \"\"\"\n\n def replacer(match: typing.Match[str]) -> str:\n return _HTML5_FORM_ENCODING_REPLACEMENTS[match.group(0)]\n\n value = _HTML5_FORM_ENCODING_RE.sub(replacer, value)\n return f'{name}=\"{value}\"'.encode()\n\n\n# Null bytes; no need to recreate these on each call to guess_json_utf\n_null = b\"\\x00\"\n_null2 = _null * 2\n_null3 = _null * 3\n\n\ndef guess_json_utf(data: bytes) -> typing.Optional[str]:\n # JSON always starts with two ASCII characters, so detection is as\n # easy as counting the nulls and from their location and count\n # determine the encoding. 
Also detect a BOM, if present.\n sample = data[:4]\n if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):\n return \"utf-32\" # BOM included\n if sample[:3] == codecs.BOM_UTF8:\n return \"utf-8-sig\" # BOM included, MS style (discouraged)\n if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):\n return \"utf-16\" # BOM included\n nullcount = sample.count(_null)\n if nullcount == 0:\n return \"utf-8\"\n if nullcount == 2:\n if sample[::2] == _null2: # 1st and 3rd are null\n return \"utf-16-be\"\n if sample[1::2] == _null2: # 2nd and 4th are null\n return \"utf-16-le\"\n # Did not detect 2 valid UTF-16 ascii-range characters\n if nullcount == 3:\n if sample[:3] == _null3:\n return \"utf-32-be\"\n if sample[1:] == _null3:\n return \"utf-32-le\"\n # Did not detect a valid UTF-32 ascii-range character\n return None\n\n\nclass NetRCInfo:\n def __init__(self, files: typing.Optional[typing.List[str]] = None) -> None:\n if files is None:\n files = [os.getenv(\"NETRC\", \"\"), \"~/.netrc\", \"~/_netrc\"]\n self.netrc_files = files\n\n @property\n def netrc_info(self) -> typing.Optional[netrc.netrc]:\n if not hasattr(self, \"_netrc_info\"):\n self._netrc_info = None\n for file_path in self.netrc_files:\n expanded_path = Path(file_path).expanduser()\n try:\n if expanded_path.is_file():\n self._netrc_info = netrc.netrc(str(expanded_path))\n break\n except (netrc.NetrcParseError, IOError): # pragma: no cover\n # Issue while reading the netrc file, ignore...\n pass\n return self._netrc_info\n\n def get_credentials(self, host: str) -> typing.Optional[typing.Tuple[str, str]]:\n if self.netrc_info is None:\n return None\n\n auth_info = self.netrc_info.authenticators(host)\n if auth_info is None or auth_info[2] is None:\n return None\n return (auth_info[0], auth_info[2])\n\n\ndef get_ca_bundle_from_env() -> typing.Optional[str]:\n if \"SSL_CERT_FILE\" in os.environ:\n ssl_file = Path(os.environ[\"SSL_CERT_FILE\"])\n if ssl_file.is_file():\n return str(ssl_file)\n if \"SSL_CERT_DIR\" in os.environ:\n ssl_path = Path(os.environ[\"SSL_CERT_DIR\"])\n if ssl_path.is_dir():\n return str(ssl_path)\n return None\n\n\ndef parse_header_links(value: str) -> typing.List[typing.Dict[str, str]]:\n \"\"\"\n Returns a list of parsed link headers, for more info see:\n https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link\n The generic syntax of those is:\n Link: < uri-reference >; param1=value1; param2=\"value2\"\n So for instance:\n Link; '<http:/.../front.jpeg>; type=\"image/jpeg\",<http://.../back.jpeg>;'\n would return\n [\n {\"url\": \"http:/.../front.jpeg\", \"type\": \"image/jpeg\"},\n {\"url\": \"http://.../back.jpeg\"},\n ]\n :param value: HTTP Link entity-header field\n :return: list of parsed link headers\n \"\"\"\n links: typing.List[typing.Dict[str, str]] = []\n replace_chars = \" '\\\"\"\n value = value.strip(replace_chars)\n if not value:\n return links\n for val in re.split(\", *<\", value):\n try:\n url, params = val.split(\";\", 1)\n except ValueError:\n url, params = val, \"\"\n link = {\"url\": url.strip(\"<> '\\\"\")}\n for param in params.split(\";\"):\n try:\n key, value = param.split(\"=\")\n except ValueError:\n break\n link[key.strip(replace_chars)] = value.strip(replace_chars)\n links.append(link)\n return links\n\n\ndef parse_content_type_charset(content_type: str) -> typing.Optional[str]:\n # We used to use `cgi.parse_header()` here, but `cgi` became a dead battery.\n # See: https://peps.python.org/pep-0594/#cgi\n msg = email.message.Message()\n msg[\"content-type\"] = 
content_type\n return msg.get_content_charset(failobj=None)\n\n\nSENSITIVE_HEADERS = {\"authorization\", \"proxy-authorization\"}\n\n\ndef obfuscate_sensitive_headers(\n items: typing.Iterable[typing.Tuple[typing.AnyStr, typing.AnyStr]]\n) -> typing.Iterator[typing.Tuple[typing.AnyStr, typing.AnyStr]]:\n for k, v in items:\n if to_str(k.lower()) in SENSITIVE_HEADERS:\n v = to_bytes_or_str(\"[secure]\", match_type_of=v)\n yield k, v\n\n\n_LOGGER_INITIALIZED = False\nTRACE_LOG_LEVEL = 5\n\n\nclass Logger(logging.Logger):\n # Stub for type checkers.\n def trace(self, message: str, *args: typing.Any, **kwargs: typing.Any) -> None:\n ... # pragma: no cover\n\n\ndef get_logger(name: str) -> Logger:\n \"\"\"\n Get a `logging.Logger` instance, and optionally\n set up debug logging based on the HTTPX_LOG_LEVEL environment variable.\n \"\"\"\n global _LOGGER_INITIALIZED\n\n if not _LOGGER_INITIALIZED:\n _LOGGER_INITIALIZED = True\n logging.addLevelName(TRACE_LOG_LEVEL, \"TRACE\")\n\n log_level = os.environ.get(\"HTTPX_LOG_LEVEL\", \"\").upper()\n if log_level in (\"DEBUG\", \"TRACE\"):\n logger = logging.getLogger(\"httpx\")\n logger.setLevel(logging.DEBUG if log_level == \"DEBUG\" else TRACE_LOG_LEVEL)\n handler = logging.StreamHandler(sys.stderr)\n handler.setFormatter(\n logging.Formatter(\n fmt=\"%(levelname)s [%(asctime)s] %(name)s - %(message)s\",\n datefmt=\"%Y-%m-%d %H:%M:%S\",\n )\n )\n logger.addHandler(handler)\n\n logger = logging.getLogger(name)\n\n def trace(message: str, *args: typing.Any, **kwargs: typing.Any) -> None:\n logger.log(TRACE_LOG_LEVEL, message, *args, **kwargs)\n\n logger.trace = trace # type: ignore\n\n return typing.cast(Logger, logger)\n\n\ndef port_or_default(url: \"URL\") -> typing.Optional[int]:\n if url.port is not None:\n return url.port\n return {\"http\": 80, \"https\": 443}.get(url.scheme)\n\n\ndef same_origin(url: \"URL\", other: \"URL\") -> bool:\n \"\"\"\n Return 'True' if the given URLs share the same origin.\n \"\"\"\n return (\n url.scheme == other.scheme\n and url.host == other.host\n and port_or_default(url) == port_or_default(other)\n )\n\n\ndef is_https_redirect(url: \"URL\", location: \"URL\") -> bool:\n \"\"\"\n Return 'True' if 'location' is a HTTPS upgrade of 'url'\n \"\"\"\n if url.host != location.host:\n return False\n\n return (\n url.scheme == \"http\"\n and port_or_default(url) == 80\n and location.scheme == \"https\"\n and port_or_default(location) == 443\n )\n\n\ndef get_environment_proxies() -> typing.Dict[str, typing.Optional[str]]:\n \"\"\"Gets proxy information from the environment\"\"\"\n\n # urllib.request.getproxies() falls back on System\n # Registry and Config for proxies on Windows and macOS.\n # We don't want to propagate non-HTTP proxies into\n # our configuration such as 'TRAVIS_APT_PROXY'.\n proxy_info = getproxies()\n mounts: typing.Dict[str, typing.Optional[str]] = {}\n\n for scheme in (\"http\", \"https\", \"all\"):\n if proxy_info.get(scheme):\n hostname = proxy_info[scheme]\n mounts[f\"{scheme}://\"] = (\n hostname if \"://\" in hostname else f\"http://{hostname}\"\n )\n\n no_proxy_hosts = [host.strip() for host in proxy_info.get(\"no\", \"\").split(\",\")]\n for hostname in no_proxy_hosts:\n # See https://curl.haxx.se/libcurl/c/CURLOPT_NOPROXY.html for details\n # on how names in `NO_PROXY` are handled.\n if hostname == \"*\":\n # If NO_PROXY=* is used or if \"*\" occurs as any one of the comma\n # separated hostnames, then we should just bypass any information\n # from HTTP_PROXY, HTTPS_PROXY, ALL_PROXY, and always 
ignore\n # proxies.\n return {}\n elif hostname:\n # NO_PROXY=.google.com is marked as \"all://*.google.com,\n # which disables \"www.google.com\" but not \"google.com\"\n # NO_PROXY=google.com is marked as \"all://*google.com,\n # which disables \"www.google.com\" and \"google.com\".\n # (But not \"wwwgoogle.com\")\n mounts[f\"all://*{hostname}\"] = None\n\n return mounts\n\n\ndef to_bytes(value: typing.Union[str, bytes], encoding: str = \"utf-8\") -> bytes:\n return value.encode(encoding) if isinstance(value, str) else value\n\n\ndef to_str(value: typing.Union[str, bytes], encoding: str = \"utf-8\") -> str:\n return value if isinstance(value, str) else value.decode(encoding)\n\n\ndef to_bytes_or_str(value: str, match_type_of: typing.AnyStr) -> typing.AnyStr:\n return value if isinstance(match_type_of, str) else value.encode()\n\n\ndef unquote(value: str) -> str:\n return value[1:-1] if value[0] == value[-1] == '\"' else value\n\n\ndef guess_content_type(filename: typing.Optional[str]) -> typing.Optional[str]:\n if filename:\n return mimetypes.guess_type(filename)[0] or \"application/octet-stream\"\n return None\n\n\ndef peek_filelike_length(stream: typing.Any) -> typing.Optional[int]:\n \"\"\"\n Given a file-like stream object, return its length in number of bytes\n without reading it into memory.\n \"\"\"\n try:\n # Is it an actual file?\n fd = stream.fileno()\n # Yup, seems to be an actual file.\n length = os.fstat(fd).st_size\n except (AttributeError, OSError):\n # No... Maybe it's something that supports random access, like `io.BytesIO`?\n try:\n # Assuming so, go to end of stream to figure out its length,\n # then put it back in place.\n offset = stream.tell()\n length = stream.seek(0, os.SEEK_END)\n stream.seek(offset)\n except (AttributeError, OSError):\n # Not even that? 
Sorry, we're doomed...\n return None\n\n return length\n\n\nclass Timer:\n async def _get_time(self) -> float:\n library = sniffio.current_async_library()\n if library == \"trio\":\n import trio\n\n return trio.current_time()\n elif library == \"curio\": # pragma: no cover\n import curio\n\n return typing.cast(float, await curio.clock())\n\n import asyncio\n\n return asyncio.get_event_loop().time()\n\n def sync_start(self) -> None:\n self.started = time.perf_counter()\n\n async def async_start(self) -> None:\n self.started = await self._get_time()\n\n def sync_elapsed(self) -> float:\n now = time.perf_counter()\n return now - self.started\n\n async def async_elapsed(self) -> float:\n now = await self._get_time()\n return now - self.started\n\n\nclass URLPattern:\n \"\"\"\n A utility class currently used for making lookups against proxy keys...\n\n # Wildcard matching...\n >>> pattern = URLPattern(\"all\")\n >>> pattern.matches(httpx.URL(\"http://example.com\"))\n True\n\n # Witch scheme matching...\n >>> pattern = URLPattern(\"https\")\n >>> pattern.matches(httpx.URL(\"https://example.com\"))\n True\n >>> pattern.matches(httpx.URL(\"http://example.com\"))\n False\n\n # With domain matching...\n >>> pattern = URLPattern(\"https://example.com\")\n >>> pattern.matches(httpx.URL(\"https://example.com\"))\n True\n >>> pattern.matches(httpx.URL(\"http://example.com\"))\n False\n >>> pattern.matches(httpx.URL(\"https://other.com\"))\n False\n\n # Wildcard scheme, with domain matching...\n >>> pattern = URLPattern(\"all://example.com\")\n >>> pattern.matches(httpx.URL(\"https://example.com\"))\n True\n >>> pattern.matches(httpx.URL(\"http://example.com\"))\n True\n >>> pattern.matches(httpx.URL(\"https://other.com\"))\n False\n\n # With port matching...\n >>> pattern = URLPattern(\"https://example.com:1234\")\n >>> pattern.matches(httpx.URL(\"https://example.com:1234\"))\n True\n >>> pattern.matches(httpx.URL(\"https://example.com\"))\n False\n \"\"\"\n\n def __init__(self, pattern: str) -> None:\n from ._urls import URL\n\n if pattern and \":\" not in pattern:\n raise ValueError(\n f\"Proxy keys should use proper URL forms rather \"\n f\"than plain scheme strings. 
\"\n f'Instead of \"{pattern}\", use \"{pattern}://\"'\n )\n\n url = URL(pattern)\n self.pattern = pattern\n self.scheme = \"\" if url.scheme == \"all\" else url.scheme\n self.host = \"\" if url.host == \"*\" else url.host\n self.port = url.port\n if not url.host or url.host == \"*\":\n self.host_regex: typing.Optional[typing.Pattern[str]] = None\n elif url.host.startswith(\"*.\"):\n # *.example.com should match \"www.example.com\", but not \"example.com\"\n domain = re.escape(url.host[2:])\n self.host_regex = re.compile(f\"^.+\\\\.{domain}$\")\n elif url.host.startswith(\"*\"):\n # *example.com should match \"www.example.com\" and \"example.com\"\n domain = re.escape(url.host[1:])\n self.host_regex = re.compile(f\"^(.+\\\\.)?{domain}$\")\n else:\n # example.com should match \"example.com\" but not \"www.example.com\"\n domain = re.escape(url.host)\n self.host_regex = re.compile(f\"^{domain}$\")\n\n def matches(self, other: \"URL\") -> bool:\n if self.scheme and self.scheme != other.scheme:\n return False\n if (\n self.host\n and self.host_regex is not None\n and not self.host_regex.match(other.host)\n ):\n return False\n if self.port is not None and self.port != other.port:\n return False\n return True\n\n @property\n def priority(self) -> typing.Tuple[int, int, int]:\n \"\"\"\n The priority allows URLPattern instances to be sortable, so that\n we can match from most specific to least specific.\n \"\"\"\n # URLs with a port should take priority over URLs without a port.\n port_priority = 0 if self.port is not None else 1\n # Longer hostnames should match first.\n host_priority = -len(self.host)\n # Longer schemes should match first.\n scheme_priority = -len(self.scheme)\n return (port_priority, host_priority, scheme_priority)\n\n def __hash__(self) -> int:\n return hash(self.pattern)\n\n def __lt__(self, other: \"URLPattern\") -> bool:\n return self.priority < other.priority\n\n def __eq__(self, other: typing.Any) -> bool:\n return isinstance(other, URLPattern) and self.pattern == other.pattern\n", "path": "httpx/_utils.py" }, { "content": "import pytest\n\nimport httpx\n\n\n@pytest.mark.parametrize(\n \"source\",\n [\n \"a=123&a=456&b=789\",\n {\"a\": [\"123\", \"456\"], \"b\": 789},\n {\"a\": (\"123\", \"456\"), \"b\": 789},\n [(\"a\", \"123\"), (\"a\", \"456\"), (\"b\", \"789\")],\n ((\"a\", \"123\"), (\"a\", \"456\"), (\"b\", \"789\")),\n ],\n)\ndef test_queryparams(source):\n q = httpx.QueryParams(source)\n assert \"a\" in q\n assert \"A\" not in q\n assert \"c\" not in q\n assert q[\"a\"] == \"123\"\n assert q.get(\"a\") == \"123\"\n assert q.get(\"nope\", default=None) is None\n assert q.get_list(\"a\") == [\"123\", \"456\"]\n\n assert list(q.keys()) == [\"a\", \"b\"]\n assert list(q.values()) == [\"123\", \"789\"]\n assert list(q.items()) == [(\"a\", \"123\"), (\"b\", \"789\")]\n assert len(q) == 2\n assert list(q) == [\"a\", \"b\"]\n assert dict(q) == {\"a\": \"123\", \"b\": \"789\"}\n assert str(q) == \"a=123&a=456&b=789\"\n assert repr(q) == \"QueryParams('a=123&a=456&b=789')\"\n assert httpx.QueryParams({\"a\": \"123\", \"b\": \"456\"}) == httpx.QueryParams(\n [(\"a\", \"123\"), (\"b\", \"456\")]\n )\n assert httpx.QueryParams({\"a\": \"123\", \"b\": \"456\"}) == httpx.QueryParams(\n \"a=123&b=456\"\n )\n assert httpx.QueryParams({\"a\": \"123\", \"b\": \"456\"}) == httpx.QueryParams(\n {\"b\": \"456\", \"a\": \"123\"}\n )\n assert httpx.QueryParams() == httpx.QueryParams({})\n assert httpx.QueryParams([(\"a\", \"123\"), (\"a\", \"456\")]) == httpx.QueryParams(\n 
\"a=123&a=456\"\n )\n assert httpx.QueryParams({\"a\": \"123\", \"b\": \"456\"}) != \"invalid\"\n\n q = httpx.QueryParams([(\"a\", \"123\"), (\"a\", \"456\")])\n assert httpx.QueryParams(q) == q\n\n\ndef test_queryparam_types():\n q = httpx.QueryParams(None)\n assert str(q) == \"\"\n\n q = httpx.QueryParams({\"a\": True})\n assert str(q) == \"a=true\"\n\n q = httpx.QueryParams({\"a\": False})\n assert str(q) == \"a=false\"\n\n q = httpx.QueryParams({\"a\": \"\"})\n assert str(q) == \"a=\"\n\n q = httpx.QueryParams({\"a\": None})\n assert str(q) == \"a=\"\n\n q = httpx.QueryParams({\"a\": 1.23})\n assert str(q) == \"a=1.23\"\n\n q = httpx.QueryParams({\"a\": 123})\n assert str(q) == \"a=123\"\n\n q = httpx.QueryParams({\"a\": [1, 2]})\n assert str(q) == \"a=1&a=2\"\n\n\ndef test_empty_query_params():\n q = httpx.QueryParams({\"a\": \"\"})\n assert str(q) == \"a=\"\n\n q = httpx.QueryParams(\"a=\")\n assert str(q) == \"a=\"\n\n q = httpx.QueryParams(\"a\")\n assert str(q) == \"a=\"\n\n\ndef test_queryparam_update_is_hard_deprecated():\n q = httpx.QueryParams(\"a=123\")\n with pytest.raises(RuntimeError):\n q.update({\"a\": \"456\"})\n\n\ndef test_queryparam_setter_is_hard_deprecated():\n q = httpx.QueryParams(\"a=123\")\n with pytest.raises(RuntimeError):\n q[\"a\"] = \"456\"\n\n\ndef test_queryparam_set():\n q = httpx.QueryParams(\"a=123\")\n q = q.set(\"a\", \"456\")\n assert q == httpx.QueryParams(\"a=456\")\n\n\ndef test_queryparam_add():\n q = httpx.QueryParams(\"a=123\")\n q = q.add(\"a\", \"456\")\n assert q == httpx.QueryParams(\"a=123&a=456\")\n\n\ndef test_queryparam_remove():\n q = httpx.QueryParams(\"a=123\")\n q = q.remove(\"a\")\n assert q == httpx.QueryParams(\"\")\n\n\ndef test_queryparam_merge():\n q = httpx.QueryParams(\"a=123\")\n q = q.merge({\"b\": \"456\"})\n assert q == httpx.QueryParams(\"a=123&b=456\")\n q = q.merge({\"a\": \"000\", \"c\": \"789\"})\n assert q == httpx.QueryParams(\"a=000&b=456&c=789\")\n\n\ndef test_queryparams_are_hashable():\n params = (\n httpx.QueryParams(\"a=123\"),\n httpx.QueryParams({\"a\": 123}),\n httpx.QueryParams(\"b=456\"),\n httpx.QueryParams({\"b\": 456}),\n )\n\n assert len(set(params)) == 2\n", "path": "tests/models/test_queryparams.py" } ]
repo_url: https://github.com/teamqurrent/httpx
base_commit: 10a3b68
requirements_txt: sniffio rfc3986 httpcore>=0.18.0,<0.19.0 certifi idna
solution_patch:

diff --git a/httpx/_utils.py b/httpx/_utils.py
--- a/httpx/_utils.py
+++ b/httpx/_utils.py
@@ -67,7 +67,11 @@ def primitive_value_to_str(value: "PrimitiveData") -> str:
         return "false"
     elif value is None:
         return ""
-    return str(value)
+    elif isinstance(value, (str, float, int)):
+        return str(value)
+    raise TypeError(
+        f"Expected str, int, float, bool, or None. Got {type(value).__name__!r}."
+    )
 
 
 def is_known_encoding(encoding: str) -> bool:
diff --git a/tests/models/test_queryparams.py b/tests/models/test_queryparams.py
--- a/tests/models/test_queryparams.py
+++ b/tests/models/test_queryparams.py
@@ -87,6 +87,13 @@ def test_empty_query_params():
     assert str(q) == "a="
 
 
+def test_invalid_query_params():
+    with pytest.raises(
+        TypeError, match=r"Expected str, int, float, bool, or None. Got 'bytes'."
+    ):
+        httpx.QueryParams({"a": b"bytes"})
+
+
 def test_queryparam_update_is_hard_deprecated():
     q = httpx.QueryParams("a=123")
     with pytest.raises(RuntimeError):
id: 0_2
solution_commit: 4cbf13e
language: python
test_script:

import sys
import unittest

class TestHttpxQueryParams(unittest.TestCase):
    def test_query_params_with_bytes(self):
        import httpx
        try:
            httpx.QueryParams({"a": b"bytes"})
            self.fail("TypeError not raised")
        except TypeError as e:
            expected_message = "Expected str, int, float, bool, or None. Got 'bytes'"
            self.assertIn(expected_message, str(e), "TypeError does not contain the expected message")

def main():
    suite = unittest.TestSuite()
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestHttpxQueryParams))
    runner = unittest.TextTestRunner()
    if runner.run(suite).wasSuccessful():
        sys.exit(0)
    else:
        sys.exit(1)

if __name__ == "__main__":
    main()
instruction: The `primitive_value_to_str` function inside `httpx/_utils.py` returns 'true', 'false', or '' when the value is a boolean or None, and returns str(value) otherwise. Modify `primitive_value_to_str` to return str(value) only if the value is of type str, float, or int, and otherwise raise a TypeError with the error message: 'Expected str, int, float, bool, or None. Got '{type}''. Update the file tests/models/test_queryparams.py to add a new test test_invalid_query_params() which expects a TypeError when bytes is passed.
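As a quick illustration of what the patch and the accompanying test expect (not part of the dataset row):

```python
# Sketch: str/int/float/bool/None still stringify; bytes now raises TypeError.
import httpx

print(httpx.QueryParams({"a": 1.23, "b": True, "c": None}))  # a=1.23&b=true&c=
try:
    httpx.QueryParams({"a": b"bytes"})
except TypeError as exc:
    print(exc)  # Expected str, int, float, bool, or None. Got 'bytes'.
```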
testbed_environment: python3.9
modified_files:
[ { "content": "import sys\n\nfrom setuptools import setup\n\nsys.stderr.write(\n \"\"\"\n===============================\nUnsupported installation method\n===============================\nhttpx no longer supports installation with `python setup.py install`.\nPlease use `python -m pip install .` instead.\n\"\"\"\n)\nsys.exit(1)\n\n\n# The below code will never execute, however GitHub is particularly\n# picky about where it finds Python packaging metadata.\n# See: https://github.com/github/feedback/discussions/6456\n#\n# To be removed once GitHub catches up.\n\nsetup(\n name=\"httpx\",\n install_requires=[\n \"certifi\",\n \"sniffio\",\n \"rfc3986[idna2008]>=1.3,<2\",\n \"httpcore>=0.15.0,<0.17.0\",\n ],\n)\n", "path": "setup.py" } ]
repo_url: https://github.com/teamqurrent/httpx
base_commit: e5bc1ea
solution_patch:

diff --git a/setup.py b/setup.py
deleted file mode 100644
--- a/setup.py
+++ /dev/null
@@ -1,31 +0,0 @@
-import sys
-
-from setuptools import setup
-
-sys.stderr.write(
-    """
-===============================
-Unsupported installation method
-===============================
-httpx no longer supports installation with `python setup.py install`.
-Please use `python -m pip install .` instead.
-"""
-)
-sys.exit(1)
-
-
-# The below code will never execute, however GitHub is particularly
-# picky about where it finds Python packaging metadata.
-# See: https://github.com/github/feedback/discussions/6456
-#
-# To be removed once GitHub catches up.
-
-setup(
-    name="httpx",
-    install_requires=[
-        "certifi",
-        "sniffio",
-        "rfc3986[idna2008]>=1.3,<2",
-        "httpcore>=0.15.0,<0.17.0",
-    ],
-)
id: 0_3
solution_commit: 10a3b68
language: python
test_script:

import os
import sys
import unittest

class TestSetupPyExists(unittest.TestCase):
    def test_setup_py_existence(self):
        # Get the current directory path
        directory_path = os.getcwd()
        # List all files in the directory
        files = os.listdir(directory_path)
        # Check if setup.py exists in the list of files
        self.assertNotIn("setup.py", files, "setup.py exists in the directory")

def main():
    suite = unittest.TestSuite()
    suite.addTests(unittest.TestLoader().loadTestsFromTestCase(TestSetupPyExists))
    runner = unittest.TextTestRunner()
    if runner.run(suite).wasSuccessful():
        sys.exit(0)
    else:
        sys.exit(1)

if __name__ == "__main__":
    main()
instruction: Delete `setup.py`.
testbed_environment: python3.9
modified_files:
[ { "content": "# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).\n\n## 0.25.0 (11th Sep, 2023)\n\n### Removed\n\n* Drop support for Python 3.7. (#2813)\n\n### Added\n\n* Support HTTPS proxies. (#2845)\n* Change the type of `Extensions` from `Mapping[Str, Any]` to `MutableMapping[Str, Any]`. (#2803)\n* Add `socket_options` argument to `httpx.HTTPTransport` and `httpx.AsyncHTTPTransport` classes. (#2716)\n* The `Response.raise_for_status()` method now returns the response instance. For example: `data = httpx.get('...').raise_for_status().json()`. (#2776)\n\n### Fixed\n\n* Return `500` error response instead of exceptions when `raise_app_exceptions=False` is set on `ASGITransport`. (#2669)\n* Ensure all `WSGITransport` environs have a `SERVER_PROTOCOL`. (#2708)\n* Always encode forward slashes as `%2F` in query parameters (#2723)\n* Use Mozilla documentation instead of `httpstatuses.com` for HTTP error reference (#2768)\n\n## 0.24.1 (17th May, 2023)\n\n### Added\n\n* Provide additional context in some `InvalidURL` exceptions. (#2675)\n\n### Fixed\n\n* Fix optional percent-encoding behaviour. (#2671)\n* More robust checking for opening upload files in binary mode. (#2630)\n* Properly support IP addresses in `NO_PROXY` environment variable. (#2659)\n* Set default file for `NetRCAuth()` to `None` to use the stdlib default. (#2667)\n* Set logging request lines to INFO level for async requests, in line with sync requests. (#2656)\n* Fix which gen-delims need to be escaped for path/query/fragment components in URL. (#2701)\n\n## 0.24.0 (6th April, 2023)\n\n### Changed\n\n* The logging behaviour has been changed to be more in-line with other standard Python logging usages. We no longer have a custom `TRACE` log level, and we no longer use the `HTTPX_LOG_LEVEL` environment variable to auto-configure logging. We now have a significant amount of `DEBUG` logging available at the network level. Full documentation is available at https://www.python-httpx.org/logging/ (#2547, encode/httpcore#648)\n* The `Response.iter_lines()` method now matches the stdlib behaviour and does not include the newline characters. It also resolves a performance issue. (#2423)\n* Query parameter encoding switches from using + for spaces and %2F for forward slash, to instead using %20 for spaces and treating forward slash as a safe, unescaped character. This differs from `requests`, but is in line with browser behavior in Chrome, Safari, and Firefox. Both options are RFC valid. (#2543)\n* NetRC authentication is no longer automatically handled, but is instead supported by an explicit `httpx.NetRCAuth()` authentication class. See the documentation at https://www.python-httpx.org/advanced/#netrc-support (#2525)\n\n### Removed\n\n* The `rfc3986` dependancy has been removed. (#2252)\n\n## 0.23.3 (4th Jan, 2023)\n\n### Fixed\n\n* Version 0.23.2 accidentally included stricter type checking on query parameters. This shouldn've have been included in a minor version bump, and is now reverted. (#2523, #2539)\n\n## 0.23.2 (2nd Jan, 2023)\n\n### Added\n\n* Support digest auth nonce counting to avoid multiple auth requests. (#2463)\n\n### Fixed\n\n* Multipart file uploads where the file length cannot be determine now use chunked transfer encoding, rather than loading the entire file into memory in order to determine the `Content-Length`. (#2382)\n* Raise `TypeError` if content is passed a dict-instance. 
(#2495)\n* Partially revert the API breaking change in 0.23.1, which removed `RawURL`. We continue to expose a `url.raw` property which is now a plain named-tuple. This API is still expected to be deprecated, but we will do so with a major version bump. (#2481)\n\n## 0.23.1 (18th Nov, 2022)\n\n**Note**: The 0.23.1 release should have used a proper version bump, rather than a minor point release.\nThere are API surface area changes that may affect some users.\nSee the \"Removed\" section of these release notes for details.\n\n### Added\n\n* Support for Python 3.11. (#2420)\n* Allow setting an explicit multipart boundary in `Content-Type` header. (#2278)\n* Allow `tuple` or `list` for multipart values, not just `list`. (#2355)\n* Allow `str` content for multipart upload files. (#2400)\n* Support connection upgrades. See https://www.encode.io/httpcore/extensions/#upgrade-requests\n\n### Fixed\n\n* Don't drop empty query parameters. (#2354)\n\n### Removed\n\n* Upload files *must* always be opened in binary mode. (#2400)\n* Drop `.read`/`.aread` from `SyncByteStream`/`AsyncByteStream`. (#2407)\n* Drop `RawURL`. (#2241)\n\n## 0.23.0 (23rd May, 2022)\n\n### Changed\n\n* Drop support for Python 3.6. (#2097)\n* Use `utf-8` as the default character set, instead of falling back to `charset-normalizer` for auto-detection. To enable automatic character set detection, see [the documentation](https://www.python-httpx.org/advanced/#character-set-encodings-and-auto-detection). (#2165)\n\n### Fixed\n\n* Fix `URL.copy_with` for some oddly formed URL cases. (#2185)\n* Digest authentication should use case-insensitive comparison for determining which algorithm is being used. (#2204)\n* Fix console markup escaping in command line client. (#1866)\n* When files are used in multipart upload, ensure we always seek to the start of the file. (#2065)\n* Ensure that `iter_bytes` never yields zero-length chunks. (#2068)\n* Preserve `Authorization` header for redirects that are to the same origin, but are an `http`-to-`https` upgrade. (#2074)\n* When responses have binary output, don't print the output to the console in the command line client. Use output like `<16086 bytes of binary data>` instead. (#2076)\n* Fix display of `--proxies` argument in the command line client help. (#2125)\n* Close responses when task cancellations occur during stream reading. (#2156)\n* Fix type error on accessing `.request` on `HTTPError` exceptions. (#2158)\n\n## 0.22.0 (26th January, 2022)\n\n### Added\n\n* Support for [the SOCKS5 proxy protocol](https://www.python-httpx.org/advanced/#socks) via [the `socksio` package](https://github.com/sethmlarson/socksio). (#2034)\n* Support for custom headers in multipart/form-data requests (#1936)\n\n### Fixed\n\n* Don't perform unreliable close/warning on `__del__` with unclosed clients. (#2026)\n* Fix `Headers.update(...)` to correctly handle repeated headers (#2038)\n\n## 0.21.3 (6th January, 2022)\n\n### Fixed\n\n* Fix streaming uploads using `SyncByteStream` or `AsyncByteStream`. Regression in 0.21.2. (#2016)\n\n## 0.21.2 (5th January, 2022)\n\n### Fixed\n\n* HTTP/2 support for tunnelled proxy cases. (#2009)\n* Improved the speed of large file uploads. (#1948)\n\n## 0.21.1 (16th November, 2021)\n\n### Fixed\n\n* The `response.url` property is now correctly annotated as `URL`, instead of `Optional[URL]`. 
(#1940)\n\n## 0.21.0 (15th November, 2021)\n\nThe 0.21.0 release integrates against a newly redesigned `httpcore` backend.\n\nBoth packages ought to automatically update to the required versions, but if you are\nseeing any issues, you should ensure that you have `httpx==0.21.*` and `httpcore==0.14.*` installed.\n\n### Added\n\n* The command-line client will now display connection information when `-v/--verbose` is used.\n* The command-line client will now display server certificate information when `-v/--verbose` is used.\n* The command-line client is now able to properly detect if the outgoing request\nshould be formatted as HTTP/1.1 or HTTP/2, based on the result of the HTTP/2 negotiation.\n\n### Removed\n\n* Curio support is no longer currently included. Please get in touch if you require this, so that we can assess priorities.\n\n## 0.20.0 (13th October, 2021)\n\nThe 0.20.0 release adds an integrated command-line client, and also includes some\ndesign changes. The most notable of these is that redirect responses are no longer\nautomatically followed, unless specifically requested.\n\nThis design decision prioritises a more explicit approach to redirects, in order\nto avoid code that unintentionally issues multiple requests as a result of\nmisconfigured URLs.\n\nFor example, previously a client configured to send requests to `http://api.github.com/`\nwould end up sending every API request twice, as each request would be redirected to `https://api.github.com/`.\n\nIf you do want auto-redirect behaviour, you can enable this either by configuring\nthe client instance with `Client(follow_redirects=True)`, or on a per-request\nbasis, with `.get(..., follow_redirects=True)`.\n\nThis change is a classic trade-off between convenience and precision, with no \"right\"\nanswer. See [discussion #1785](https://github.com/encode/httpx/discussions/1785) for more\ncontext.\n\nThe other major design change is an update to the Transport API, which is the low-level\ninterface against which requests are sent. Previously this interface used only primitive\ndatastructures, like so...\n\n```python\n(status_code, headers, stream, extensions) = transport.handle_request(method, url, headers, stream, extensions)\ntry\n ...\nfinally:\n stream.close()\n```\n\nNow the interface is much simpler...\n\n```python\nresponse = transport.handle_request(request)\ntry\n ...\nfinally:\n response.close()\n```\n\n### Changed\n\n* The `allow_redirects` flag is now `follow_redirects` and defaults to `False`.\n* The `raise_for_status()` method will now raise an exception for any responses\n except those with 2xx status codes. Previously only 4xx and 5xx status codes\n would result in an exception.\n* The low-level transport API changes to the much simpler `response = transport.handle_request(request)`.\n* The `client.send()` method no longer accepts a `timeout=...` argument, but the\n `client.build_request()` does. This required by the signature change of the\n Transport API. The request timeout configuration is now stored on the request\n instance, as `request.extensions['timeout']`.\n\n### Added\n\n* Added the `httpx` command-line client.\n* Response instances now include `.is_informational`, `.is_success`, `.is_redirect`, `.is_client_error`, and `.is_server_error`\n properties for checking 1xx, 2xx, 3xx, 4xx, and 5xx response types. 
Note that the behaviour of `.is_redirect` is slightly different in that it now returns True for all 3xx responses, in order to allow for a consistent set of properties onto the different HTTP status code types. The `response.has_redirect_location` location may be used to determine responses with properly formed URL redirects.\n\n### Fixed\n\n* `response.iter_bytes()` no longer raises a ValueError when called on a response with no content. (Pull #1827)\n* The `'wsgi.error'` configuration now defaults to `sys.stderr`, and is corrected to be a `TextIO` interface, not a `BytesIO` interface. Additionally, the WSGITransport now accepts a `wsgi_error` configuration. (Pull #1828)\n* Follow the WSGI spec by properly closing the iterable returned by the application. (Pull #1830)\n\n## 0.19.0 (19th August, 2021)\n\n### Added\n\n* Add support for `Client(allow_redirects=<bool>)`. (Pull #1790)\n* Add automatic character set detection, when no `charset` is included in the response `Content-Type` header. (Pull #1791)\n\n### Changed\n\n* Event hooks are now also called for any additional redirect or auth requests/responses. (Pull #1806)\n* Strictly enforce that upload files must be opened in binary mode. (Pull #1736)\n* Strictly enforce that client instances can only be opened and closed once, and cannot be re-opened. (Pull #1800)\n* Drop `mode` argument from `httpx.Proxy(..., mode=...)`. (Pull #1795)\n\n## 0.18.2 (17th June, 2021)\n\n### Added\n\n* Support for Python 3.10. (Pull #1687)\n* Expose `httpx.USE_CLIENT_DEFAULT`, used as the default to `auth` and `timeout` parameters in request methods. (Pull #1634)\n* Support [HTTP/2 \"prior knowledge\"](https://python-hyper.org/projects/hyper-h2/en/v2.3.1/negotiating-http2.html#prior-knowledge), using `httpx.Client(http1=False, http2=True)`. (Pull #1624)\n\n### Fixed\n\n* Clean up some cases where warnings were being issued. (Pull #1687)\n* Prefer Content-Length over Transfer-Encoding: chunked for content=<file-like> cases. (Pull #1619)\n\n## 0.18.1 (29th April, 2021)\n\n### Changed\n\n* Update brotli support to use the `brotlicffi` package (Pull #1605)\n* Ensure that `Request(..., stream=...)` does not auto-generate any headers on the request instance. (Pull #1607)\n\n### Fixed\n\n* Pass through `timeout=...` in top-level httpx.stream() function. (Pull #1613)\n* Map httpcore transport close exceptions to httpx exceptions. (Pull #1606)\n\n## 0.18.0 (27th April, 2021)\n\nThe 0.18.x release series formalises our low-level Transport API, introducing the base classes `httpx.BaseTransport` and `httpx.AsyncBaseTransport`.\n\nSee the \"[Writing custom transports](https://www.python-httpx.org/advanced/#writing-custom-transports)\" documentation and the [`httpx.BaseTransport.handle_request()`](https://github.com/encode/httpx/blob/397aad98fdc8b7580a5fc3e88f1578b4302c6382/httpx/_transports/base.py#L77-L147) docstring for more complete details on implementing custom transports.\n\nPull request #1522 includes a checklist of differences from the previous `httpcore` transport API, for developers implementing custom transports.\n\nThe following API changes have been issuing deprecation warnings since 0.17.0 onwards, and are now fully deprecated...\n\n* You should now use httpx.codes consistently instead of httpx.StatusCodes.\n* Use limits=... 
\n### Changed\n\n* Transport instances now inherit from `httpx.BaseTransport` or `httpx.AsyncBaseTransport`,\n and should implement either the `handle_request` method or `handle_async_request` method. (Pull #1522, #1550)\n* The `response.ext` property and `Response(ext=...)` argument are now named `extensions`. (Pull #1522)\n* The recommendation to not use `data=<bytes|str|bytes (a)iterator>` in favour of `content=<bytes|str|bytes (a)iterator>` has now been escalated to a deprecation warning. (Pull #1573)\n* Drop `Response(on_close=...)` from the API, since it was a bit of a leaked implementation detail. (Pull #1572)\n* When using a client instance, cookies should always be set on the client, rather than on a per-request basis. We prefer enforcing a stricter API here because it provides clearer expectations around cookie persistence, particularly when redirects occur. (Pull #1574)\n* The runtime exception `httpx.ResponseClosed` is now named `httpx.StreamClosed`. (#1584)\n* The `httpx.QueryParams` model now presents an immutable interface. There is a discussion on [the design and motivation here](https://github.com/encode/httpx/discussions/1599). Use `client.params = client.params.merge(...)` instead of `client.params.update(...)`. The basic query manipulation methods are `query.set(...)`, `query.add(...)`, and `query.remove()`, as sketched below. (#1600)\n
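\nA small sketch of the immutable style, using the methods named above:\n\n```python\nimport httpx\n\nparams = httpx.QueryParams(\"a=1\")\nparams = params.set(\"b\", \"2\")  # Each call returns a new instance.\nparams = params.add(\"a\", \"3\")  # A key may hold multiple values.\nparams = params.remove(\"b\")\nassert str(params) == \"a=1&a=3\"\n```\n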
\n### Added\n\n* The `Request` and `Response` classes can now be serialized using pickle. (#1579)\n* Handle `data={\"key\": [None|int|float|bool]}` cases. (Pull #1539)\n* Support `httpx.URL(**kwargs)`, for example `httpx.URL(scheme=\"https\", host=\"www.example.com\", path=\"/\")`, or `httpx.URL(\"https://www.example.com/\", username=\"tom@gmail.com\", password=\"123 456\")`. (Pull #1601)\n* Support `url.copy_with(params=...)`. (Pull #1601)\n* Add `url.params` property, returning an immutable `QueryParams` instance. (Pull #1601)\n* Support query manipulation methods on the URL class. These are `url.copy_set_param()`, `url.copy_add_param()`, `url.copy_remove_param()`, `url.copy_merge_params()`. (Pull #1601)\n* The `httpx.URL` class now performs port normalization, so `:80` ports are stripped from `http` URLs and `:443` ports are stripped from `https` URLs. (Pull #1603)\n* The `URL.host` property returns unicode strings for internationalized domain names. The `URL.raw_host` property returns byte strings with IDNA escaping applied. (Pull #1590)\n
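\nPutting the new URL APIs together, a brief sketch:\n\n```python\nimport httpx\n\nurl = httpx.URL(scheme=\"https\", host=\"www.example.com\", port=443, path=\"/\")\nassert str(url) == \"https://www.example.com/\"  # The default :443 port is stripped.\n\nurl = url.copy_with(params={\"page\": \"2\"})\nassert url.params[\"page\"] == \"2\"\n```\n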
\n### Fixed\n\n* Fix Content-Length for cases of `files=...` where unicode string is used as the file content. (Pull #1537)\n* Fix some cases of merging relative URLs against `Client(base_url=...)`. (Pull #1532)\n* The `request.content` attribute is now always available except for streaming content, which requires an explicit `.read()`. (Pull #1583)\n\n## 0.17.1 (March 15th, 2021)\n\n### Fixed\n\n* Type annotation on `CertTypes` allows `keyfile` and `password` to be optional. (Pull #1503)\n* Fix httpcore pinned version. (Pull #1495)\n\n## 0.17.0 (February 28th, 2021)\n\n### Added\n\n* Add `httpx.MockTransport()`, allowing you to mock out a transport using pre-determined responses. (Pull #1401, Pull #1449)\n* Add `httpx.HTTPTransport()` and `httpx.AsyncHTTPTransport()` default transports. (Pull #1399)\n* Add mount API support, using `httpx.Client(mounts=...)`. (Pull #1362)\n* Add `chunk_size` parameter to `iter_raw()`, `iter_bytes()`, `iter_text()`. (Pull #1277)\n* Add `keepalive_expiry` parameter to `httpx.Limits()` configuration. (Pull #1398)\n* Add repr to `httpx.Cookies` to display available cookies. (Pull #1411)\n* Add support for `params=<tuple>` (previously only `params=<list>` was supported). (Pull #1426)\n\n### Fixed\n\n* Add missing `raw_path` to ASGI scope. (Pull #1357)\n* Tweak `create_ssl_context` defaults to use `trust_env=True`. (Pull #1447)\n* Properly URL-escape WSGI `PATH_INFO`. (Pull #1391)\n* Properly set default ports in WSGI transport. (Pull #1469)\n* Properly encode slashes when using `base_url`. (Pull #1407)\n* Properly map exceptions in `request.aclose()`. (Pull #1465)\n\n## 0.16.1 (October 8th, 2020)\n\n### Fixed\n\n* Support literal IPv6 addresses in URLs. (Pull #1349)\n* Force lowercase headers in ASGI scope dictionaries. (Pull #1351)\n\n## 0.16.0 (October 6th, 2020)\n\n### Changed\n\n* Preserve HTTP header casing. (Pull #1338, encode/httpcore#216, python-hyper/h11#104)\n* Drop `response.next()` and `response.anext()` methods in favour of `response.next_request` attribute. (Pull #1339)\n* Closed clients now raise a runtime error if attempting to send a request. (Pull #1346)\n\n### Added\n\n* Add Python 3.9 to officially supported versions.\n* Type annotate `__enter__`/`__exit__`/`__aenter__`/`__aexit__` in a way that supports subclasses of `Client` and `AsyncClient`. (Pull #1336)\n\n## 0.15.5 (October 1st, 2020)\n\n### Added\n\n* Add `response.next_request`. (Pull #1334)\n\n## 0.15.4 (September 25th, 2020)\n\n### Added\n\n* Support direct comparisons between `Headers` and dicts or lists of two-tuples. Eg. `assert response.headers == {\"Content-Length\": 24}` (Pull #1326)\n\n### Fixed\n\n* Fix automatic `.read()` when `Response` instances are created with `content=<str>`. (Pull #1324)\n\n## 0.15.3 (September 24th, 2020)\n\n### Fixed\n\n* Fixed connection leak in async client due to improper closing of response streams. (Pull #1316)\n\n## 0.15.2 (September 23rd, 2020)\n\n### Fixed\n\n* Fixed `response.elapsed` property. (Pull #1313)\n* Fixed client authentication interaction with `.stream()`. (Pull #1312)\n\n## 0.15.1 (September 23rd, 2020)\n\n### Fixed\n\n* ASGITransport now properly applies URL decoding to the `path` component, as-per the ASGI spec. (Pull #1307)\n\n## 0.15.0 (September 22nd, 2020)\n\n### Added\n\n* Added support for curio. (Pull https://github.com/encode/httpcore/pull/168)\n* Added support for event hooks. (Pull #1246)\n* Added support for authentication flows which require either sync or async I/O. (Pull #1217)\n* Added support for monitoring download progress with `response.num_bytes_downloaded`. (Pull #1268)\n* Added `Request(content=...)` for byte content, instead of overloading `Request(data=...)`. (Pull #1266)\n* Added support for all URL components as parameter names when using `url.copy_with(...)`. (Pull #1285)\n* Neater split between automatically populated headers on `Request` instances, vs default `client.headers`. (Pull #1248)\n* Unclosed `AsyncClient` instances will now raise warnings if garbage collected. (Pull #1197)\n* Support `Response(content=..., text=..., html=..., json=...)` for creating usable response instances in code. (Pull #1265, #1297)\n* Support instantiating requests from the low-level transport API. (Pull #1293)\n* Raise errors on invalid URL types. (Pull #1259)\n\n### Changed\n\n* Cleaned up expected behaviour for URL escaping. `url.path` is now URL escaped. (Pull #1285)\n* Cleaned up expected behaviour for bytes vs str in URL components. `url.userinfo` and `url.query` are not URL escaped, and so return bytes. (Pull #1285)\n* Drop `url.authority` property in favour of `url.netloc`, since \"authority\" was semantically incorrect. (Pull #1285)\n* Drop `url.full_path` property in favour of `url.raw_path`, for better consistency with other parts of the API. (Pull #1285)\n* No longer use the `chardet` library for auto-detecting charsets, instead defaulting to a simpler approach when no charset is specified. (#1269)\n\n### Fixed\n\n* Swapped ordering of redirects and authentication flow. (Pull #1267)\n* `.netrc` lookups should use host, not host+port. (Pull #1298)\n\n### Removed\n\n* The `URLLib3Transport` class no longer exists. We've published it instead as an example of [a custom transport class](https://gist.github.com/florimondmanca/d56764d78d748eb9f73165da388e546e). (Pull #1182)\n* Drop `request.timer` attribute, which was being used internally to set `response.elapsed`. (Pull #1249)\n* Drop `response.decoder` attribute, which was being used internally. (Pull #1276)\n* `Request.prepare()` is now a private method. (Pull #1284)\n* The `Headers.getlist()` method had previously been deprecated in favour of `Headers.get_list()`. It is now fully removed.\n* The `QueryParams.getlist()` method had previously been deprecated in favour of `QueryParams.get_list()`. It is now fully removed.\n* The `URL.is_ssl` property had previously been deprecated in favour of `URL.scheme == \"https\"`. It is now fully removed.\n* The `httpx.PoolLimits` class had previously been deprecated in favour of `httpx.Limits`. It is now fully removed.\n* The `max_keepalive` setting had previously been deprecated in favour of the more explicit `max_keepalive_connections`. It is now fully removed.\n* The verbose `httpx.Timeout(5.0, connect_timeout=60.0)` style had previously been deprecated in favour of `httpx.Timeout(5.0, connect=60.0)`. It is now fully removed.\n* Support for instantiating a timeout config missing some defaults, such as `httpx.Timeout(connect=60.0)`, had previously been deprecated in favour of enforcing a more explicit style, such as `httpx.Timeout(5.0, connect=60.0)`. This is now strictly enforced, as sketched below.\n
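\nA minimal sketch of the enforced timeout style:\n\n```python\nimport httpx\n\n# An explicit default (here 5.0 seconds) is required; named arguments\n# such as connect=... then override it per-category.\ntimeout = httpx.Timeout(5.0, connect=60.0)\nclient = httpx.Client(timeout=timeout)\n```\n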
\n## 0.14.3 (September 2nd, 2020)\n\n### Added\n\n* `httpx.Response()` may now be instantiated without a `request=...` parameter. Useful for some unit testing cases. (Pull #1238)\n* Add `103 Early Hints` and `425 Too Early` status codes. (Pull #1244)\n\n### Fixed\n\n* `DigestAuth` now handles responses that include multiple 'WWW-Authenticate' headers. (Pull #1240)\n* Call into transport `__enter__`/`__exit__` or `__aenter__`/`__aexit__` when client is used in a context manager style. (Pull #1218)\n\n## 0.14.2 (August 24th, 2020)\n\n### Added\n\n* Support `client.get(..., auth=None)` to bypass the default authentication on a client. (Pull #1115)\n* Support `client.auth = ...` property setter. (Pull #1185)\n* Support `httpx.get(..., proxies=...)` on top-level request functions. (Pull #1198)\n* Display instances with nicer import styles. (Eg. `<httpx.ReadTimeout ...>`) (Pull #1155)\n* Support `cookies=[(key, value)]` list-of-two-tuples style usage. (Pull #1211)\n\n### Fixed\n\n* Ensure that automatically included headers on a request may be modified. (Pull #1205)\n* Allow explicit `Content-Length` header on streaming requests. (Pull #1170)\n* Handle URL quoted usernames and passwords properly. (Pull #1159)\n* Use more consistent default for `HEAD` requests, setting `allow_redirects=True`. (Pull #1183)\n* If a transport error occurs while streaming the response, raise an `httpx` exception, not the underlying `httpcore` exception. (Pull #1190)\n* Include the underlying `httpcore` traceback, when transport exceptions occur. (Pull #1199)\n\n## 0.14.1 (August 11th, 2020)\n\n### Added\n\n* The `httpx.URL(...)` class now raises `httpx.InvalidURL` on invalid URLs, rather than exposing the underlying `rfc3986` exception. If a redirect response includes an invalid 'Location' header, then a `RemoteProtocolError` exception is raised, which will be associated with the request that caused it. (Pull #1163)\n\n### Fixed\n\n* Handling multiple `Set-Cookie` headers became broken in the 0.14.0 release, and is now resolved. (Pull #1156)\n\n## 0.14.0 (August 7th, 2020)\n\nThe 0.14 release includes a range of improvements to the public API, intended to prepare for our upcoming 1.0 release.\n\n* Our HTTP/2 support is now fully optional. **You now need to use `pip install httpx[http2]` if you want to include the HTTP/2 dependencies.**\n* Our HSTS support has now been removed. Rewriting URLs from `http` to `https` if the host is on the HSTS list can be beneficial in avoiding roundtrips to incorrectly formed URLs, but on balance we've decided to remove this feature, on the principle of least surprise. Most programmatic clients do not include HSTS support, and for now we're opting to remove our support for it.\n* Our exception hierarchy has been overhauled. Most users will want to stick with their existing `httpx.HTTPError` usage, but we've got a clearer overall structure now. See https://www.python-httpx.org/exceptions/ for more details.\n\nWhen upgrading you should be aware of the following public API changes. Note that deprecated usages will currently continue to function, but will issue warnings.\n\n* You should now use `httpx.codes` consistently instead of `httpx.StatusCodes`.\n* Usage of `httpx.Timeout()` should now always include an explicit default. Eg. `httpx.Timeout(None, pool=5.0)`.\n* When using `httpx.Timeout()`, we now have more concisely named keyword arguments. Eg. `read=5.0`, instead of `read_timeout=5.0`.\n* Use `httpx.Limits()` instead of `httpx.PoolLimits()`, and `limits=...` instead of `pool_limits=...`.\n* The `httpx.Limits(max_keepalive=...)` argument is now deprecated in favour of a more explicit `httpx.Limits(max_keepalive_connections=...)`.\n* Keys used with `Client(proxies={...})` should now be in the style of `{\"http://\": ...}`, rather than `{\"http\": ...}`.\n* The multidict methods `Headers.getlist()` and `QueryParams.getlist()` are deprecated in favour of more consistent `.get_list()` variants.\n* The `URL.is_ssl` property is deprecated in favour of `URL.scheme == \"https\"`.\n* The `URL.join(relative_url=...)` method is now `URL.join(url=...)`. This change does not support warnings for the deprecated usage style.\n\nOne notable aspect of the 0.14.0 release is that it tightens up the public API for `httpx`, by ensuring that several internal attributes and methods have now become strictly private.\n\nThe following previously had nominally public names on the client, but were all undocumented and intended solely for internal usage. They are all now replaced with underscored names, and should not be relied on or accessed.\n
\nThese changes should not affect users who have been working from the `httpx` documentation.\n\n* `.merge_url()`, `.merge_headers()`, `.merge_cookies()`, `.merge_queryparams()`\n* `.build_auth()`, `.build_redirect_request()`\n* `.redirect_method()`, `.redirect_url()`, `.redirect_headers()`, `.redirect_stream()`\n* `.send_handling_redirects()`, `.send_handling_auth()`, `.send_single_request()`\n* `.init_transport()`, `.init_proxy_transport()`\n* `.proxies`, `.transport`, `.netrc`, `.get_proxy_map()`\n\nSee pull requests #997, #1065, #1071.\n\nSome areas of API which were already on the deprecation path, and were raising warnings or errors in 0.13.x, have now been escalated to being fully removed.\n\n* Drop `ASGIDispatch`, `WSGIDispatch`, which have been replaced by `ASGITransport`, `WSGITransport`.\n* Drop `dispatch=...` on client, which has been replaced by `transport=...`.\n* Drop `soft_limit`, `hard_limit`, which have been replaced by `max_keepalive` and `max_connections`.\n* Drop `Response.stream` and `Response.raw`, which have been replaced by `.aiter_bytes` and `.aiter_raw`.\n* Drop `proxies=<transport instance>` in favor of `proxies=httpx.Proxy(...)`.\n\nSee pull requests #1057, #1058.\n\n### Added\n\n* Added dedicated exception class `httpx.HTTPStatusError` for `.raise_for_status()` exceptions. (Pull #1072)\n* Added `httpx.create_ssl_context()` helper function. (Pull #996)\n* Support for proxy exclusions like `proxies={\"https://www.example.com\": None}`. (Pull #1099)\n* Support `QueryParams(None)` and `client.params = None`. (Pull #1060)\n\n### Changed\n\n* Use `httpx.codes` consistently in favour of `httpx.StatusCodes`, which is placed into deprecation. (Pull #1088)\n* Usage of `httpx.Timeout()` should now always include an explicit default. Eg. `httpx.Timeout(None, pool=5.0)`. (Pull #1085)\n* Switch to more concise `httpx.Timeout()` keyword arguments. Eg. `read=5.0`, instead of `read_timeout=5.0`. (Pull #1111)\n* Use `httpx.Limits()` instead of `httpx.PoolLimits()`, and `limits=...` instead of `pool_limits=...`. (Pull #1113)\n* Keys used with `Client(proxies={...})` should now be in the style of `{\"http://\": ...}`, rather than `{\"http\": ...}`. (Pull #1127)\n* The multidict methods `Headers.getlist` and `QueryParams.getlist` are deprecated in favour of more consistent `.get_list()` variants. (Pull #1089)\n* `URL.port` becomes `Optional[int]`. Now only returns a port if one is explicitly included in the URL string. (Pull #1080)\n* The `URL(..., allow_relative=[bool])` parameter no longer exists. All URL instances may be relative. (Pull #1073)\n* Drop unnecessary `url.full_path = ...` property setter. (Pull #1069)\n* The `URL.join(relative_url=...)` method is now `URL.join(url=...)`. (Pull #1129)\n* The `URL.is_ssl` property is deprecated in favour of `URL.scheme == \"https\"`. (Pull #1128)\n\n### Fixed\n\n* Add missing `Response.next()` method. (Pull #1055)\n* Ensure all exception classes are exposed as public API. (Pull #1045)\n* Support multiple items with an identical field name in multipart encodings. (Pull #777)\n* Skip HSTS preloading on single-label domains. (Pull #1074)\n* Fixes for `Response.iter_lines()`. (Pull #1033, #1075)\n* Ignore permission errors when accessing `.netrc` files. (Pull #1104)\n* Allow bare hostnames in `HTTP_PROXY` etc... environment variables. (Pull #1120)\n* Setting `app=...` or `transport=...` bypasses any environment based proxy defaults. 
(Pull #1122)\n* Fix handling of `.base_url` when a path component is included in the base URL. (Pull #1130)\n\n---\n\n## 0.13.3 (May 29th, 2020)\n\n### Fixed\n\n* Include missing keepalive expiry configuration. (Pull #1005)\n* Improved error message when URL redirect has a custom scheme. (Pull #1002)\n\n## 0.13.2 (May 27th, 2020)\n\n### Fixed\n\n* Include explicit \"Content-Length: 0\" on POST, PUT, PATCH if no request body is used. (Pull #995)\n* Add `http2` option to `httpx.Client`. (Pull #982)\n* Tighten up API typing in places. (Pull #992, #999)\n\n## 0.13.1 (May 22nd, 2020)\n\n### Fixed\n\n* Fix pool options deprecation warning. (Pull #980)\n* Include `httpx.URLLib3ProxyTransport` in top-level API. (Pull #979)\n\n## 0.13.0 (May 22nd, 2020)\n\nThis release switches to `httpcore` for all the internal networking, which means:\n\n* We're using the same codebase for both our sync and async clients.\n* HTTP/2 support is now available with the sync client.\n* We no longer have a `urllib3` dependency for our sync client, although there is still an *optional* `URLLib3Transport` class.\n\nIt also means we've had to remove our UDS support, since maintaining that would have meant having to push back our work towards a 1.0 release, which isn't a trade-off we wanted to make.\n\nWe also now have [a public \"Transport API\"](https://www.python-httpx.org/advanced/#custom-transports), which you can use to implement custom transport implementations against. This formalises and replaces our previously private \"Dispatch API\".\n\n### Changed\n\n* Use `httpcore` for underlying HTTP transport. Drop `urllib3` requirement. (Pull #804, #967)\n* Rename pool limit options from `soft_limit`/`hard_limit` to `max_keepalive`/`max_connections`. (Pull #968)\n* The previous private \"Dispatch API\" has now been promoted to a public \"Transport API\". When customizing the transport use `transport=...`. The `ASGIDispatch` and `WSGIDispatch` class naming is deprecated in favour of `ASGITransport` and `WSGITransport`. (Pull #963)\n\n### Added\n\n* Added `URLLib3Transport` class for optional `urllib3` transport support. (Pull #804, #963)\n* Streaming multipart uploads. (Pull #857)\n* Logging via HTTPCORE_LOG_LEVEL and HTTPX_LOG_LEVEL environment variables\nand TRACE level logging. (Pull encode/httpcore#79)\n\n### Fixed\n\n* Performance improvement in brotli decoder. (Pull #906)\n* Proper warning level of deprecation notice in `Response.stream` and `Response.raw`. (Pull #908)\n* Fix support for generator based WSGI apps. (Pull #887)\n* Reuse of connections on HTTP/2 in close concurrency situations. (Pull encode/httpcore#81)\n* Honor HTTP/2 max concurrent streams settings (Pull encode/httpcore#89, encode/httpcore#90)\n* Fix bytes support in multipart uploads. (Pull #974)\n* Improve typing support for `files=...`. (Pull #976)\n\n### Removed\n\n* Dropped support for `Client(uds=...)` (Pull #804)\n\n## 0.13.0.dev2 (May 12th, 2020)\n\nThe 0.13.0.dev2 is a *pre-release* version. To install it, use `pip install httpx --pre`.\n\n### Added\n\n* Logging via HTTPCORE_LOG_LEVEL and HTTPX_LOG_LEVEL environment variables\nand TRACE level logging. (HTTPCore Pull #79)\n\n### Fixed\n\n* Reuse of connections on HTTP/2 in close concurrency situations. (HTTPCore Pull #81)\n* When using an `app=<ASGI app>` observe neater disconnect behaviour instead of sending empty body messages. (Pull #919)\n\n## 0.13.0.dev1 (May 6th, 2020)\n\nThe 0.13.0.dev1 is a *pre-release* version. 
To install it, use `pip install httpx --pre`.\n\n### Fixed\n\n* Passing `http2` flag to proxy dispatchers. (Pull #934)\n* Use [`httpcore` v0.8.3](https://github.com/encode/httpcore/releases/tag/0.8.3)\nwhich addresses problems in handling of headers when using proxies.\n\n## 0.13.0.dev0 (April 30th, 2020)\n\nThe 0.13.0.dev0 is a *pre-release* version. To install it, use `pip install httpx --pre`.\n\nThis release switches to `httpcore` for all the internal networking, which means:\n\n* We're using the same codebase for both our sync and async clients.\n* HTTP/2 support is now available with the sync client.\n* We no longer have a `urllib3` dependency for our sync client, although there is still an *optional* `URLLib3Dispatcher` class.\n\nIt also means we've had to remove our UDS support, since maintaining that would have meant having to push back our work towards a 1.0 release, which isn't a trade-off we wanted to make.\n\n### Changed\n\n* Use `httpcore` for underlying HTTP transport. Drop `urllib3` requirement. (Pull #804)\n\n### Added\n\n* Added `URLLib3Dispatcher` class for optional `urllib3` transport support. (Pull #804)\n* Streaming multipart uploads. (Pull #857)\n\n### Fixed\n\n* Performance improvement in brotli decoder. (Pull #906)\n* Proper warning level of deprecation notice in `Response.stream` and `Response.raw`. (Pull #908)\n* Fix support for generator based WSGI apps. (Pull #887)\n\n### Removed\n\n* Dropped support for `Client(uds=...)`. (Pull #804)\n\n---\n\n## 0.12.1 (March 19th, 2020)\n\n### Fixed\n\n* Resolved packaging issue, where additional files were being included.\n\n## 0.12.0 (March 9th, 2020)\n\nThe 0.12 release tightens up the API expectations for `httpx` by switching to private module names to enforce better clarity around public API.\n\nAll imports of `httpx` should import from the top-level package only, such as `from httpx import Request`, rather than importing from privately namespaced modules such as `from httpx._models import Request`.\n\n### Added\n\n* Support making response body available to auth classes with `.requires_response_body`. (Pull #803)\n* Export `NetworkError` exception. (Pull #814)\n* Add support for `NO_PROXY` environment variable. (Pull #835)\n\n### Changed\n\n* Switched to private module names. (Pull #785)\n* Drop redirect looping detection and the `RedirectLoop` exception, instead using `TooManyRedirects`. (Pull #819)\n* Drop `backend=...` parameter on `AsyncClient`, in favour of always autodetecting `trio`/`asyncio`. (Pull #791)\n\n### Fixed\n\n* Support basic auth credentials in proxy URLs. (Pull #780)\n* Fix `httpx.Proxy(url, mode=\"FORWARD_ONLY\")` configuration. (Pull #788)\n* Fall back to setting headers as UTF-8 if no encoding is specified. (Pull #820)\n* Close proxy dispatch classes on client close. (Pull #826)\n* Support custom `cert` parameters even if `verify=False`. (Pull #796)\n* Don't support invalid dict-of-dicts form data in `data=...`. (Pull #811)\n\n---\n\n## 0.11.1 (January 17th, 2020)\n\n### Fixed\n\n* Fixed usage of `proxies=...` on `Client()`. (Pull #763)\n* Support both `zlib` and `deflate` style encodings on `Content-Encoding: deflate`. (Pull #758)\n* Fix for streaming a redirect response body with `allow_redirects=False`. (Pull #766)\n* Handle redirect with malformed Location headers missing host. 
(Pull #774)\n\n## 0.11.0 (January 9th, 2020)\n\nThe 0.11 release reintroduces our sync support, so that `httpx` now supports both a standard thread-concurrency API, and an async API.\n\nExisting async `httpx` users that are upgrading to 0.11 should ensure that:\n\n* Async codebases should always use a client instance to make requests, instead of the top-level API.\n* The async client is named as `httpx.AsyncClient()`, instead of `httpx.Client()`.\n* When instantiating proxy configurations use the `httpx.Proxy()` class, instead of the previous `httpx.HTTPProxy()`. This new configuration class works for configuring both sync and async clients.\n\nWe believe the API is now pretty much stable, and are aiming for a 1.0 release sometime on or before April 2020.\n\n### Changed\n\n- Top level API such as `httpx.get(url, ...)`, `httpx.post(url, ...)`, `httpx.request(method, url, ...)` becomes synchronous.\n- Added `httpx.Client()` for synchronous clients, with `httpx.AsyncClient` being used for async clients.\n- Switched to `proxies=httpx.Proxy(...)` for proxy configuration.\n- Network connection errors are wrapped in `httpx.NetworkError`, rather than exposing lower-level exception types directly.\n\n### Removed\n\n- The `request.url.origin` property and `httpx.Origin` class are no longer available.\n- The per-request `cert`, `verify`, and `trust_env` arguments are escalated from raising errors if used, to no longer being available. These arguments should be used on a per-client instance instead, or in the top-level API.\n- The `stream` argument has escalated from raising an error when used, to no longer being available. Use the `client.stream(...)` or `httpx.stream()` streaming API instead.\n\n### Fixed\n\n- Redirect loop detection matches against `(method, url)` rather than `url`. (Pull #734)\n\n---\n\n## 0.10.1 (December 31st, 2019)\n\n### Fixed\n\n- Fix issue with concurrent connection acquiry. (Pull #700)\n- Fix write error on closing HTTP/2 connections. (Pull #699)\n\n## 0.10.0 (December 29th, 2019)\n\nThe 0.10.0 release makes some changes that will allow us to support both sync and async interfaces.\n\nIn particular with streaming responses the `response.read()` method becomes `response.aread()`, and the `response.close()` method becomes `response.aclose()`.\n\nIf following redirects explicitly the `response.next()` method becomes `response.anext()`.\n\n### Fixed\n\n- End HTTP/2 streams immediately on no-body requests, rather than sending an empty body message. (Pull #682)\n- Improve typing for `Response.request`: switch from `Optional[Request]` to `Request`. (Pull #666)\n- `Response.elapsed` now reflects the entire download time. (Pull #687, #692)\n\n### Changed\n\n- Added `AsyncClient` as a synonym for `Client`. (Pull #680)\n- Switch to `response.aread()` for conditionally reading streaming responses. (Pull #674)\n- Switch to `response.aclose()` and `client.aclose()` for explicit closing. (Pull #674, #675)\n- Switch to `response.anext()` for resolving the next redirect response. (Pull #676)\n\n### Removed\n\n- When using a client instance, the per-request usage of `verify`, `cert`, and `trust_env` have now escalated from raising a warning to raising an error. You should set these arguments on the client instead. (Pull #617)\n- Removed the undocumented `request.read()`, since end users should not require it.\n\n---\n\n## 0.9.5 (December 20th, 2019)\n\n### Fixed\n\n- Fix Host header and HSTS rewrites when an explicit `:80` port is included in URL. 
(Pull #649)\n- Query params on the URL string are merged with any `params=...` argument. (Pull #653)\n- More robust behavior when closing connections. (Pull #640)\n- More robust behavior when handling HTTP/2 headers with trailing whitespace. (Pull #637)\n- Allow any explicit `Content-Type` header to take precedence over the encoding default. (Pull #633)\n\n## 0.9.4 (December 12th, 2019)\n\n### Fixed\n\n- Added expiry to Keep-Alive connections, resolving issues with acquiring connections. (Pull #627)\n- Increased flow control windows on HTTP/2, resolving download speed issues. (Pull #629)\n\n## 0.9.3 (December 7th, 2019)\n\n### Fixed\n\n- Fixed HTTP/2 with autodetection backend. (Pull #614)\n\n## 0.9.2 (December 7th, 2019)\n\n* Released due to packaging build artifact.\n\n## 0.9.1 (December 6th, 2019)\n\n* Released due to packaging build artifact.\n\n## 0.9.0 (December 6th, 2019)\n\nThe 0.9 release brings some major new features, including:\n\n* A new streaming API.\n* Autodetection of either asyncio or trio.\n* Nicer timeout configuration.\n* HTTP/2 support off by default, but can be enabled.\n\nWe've also removed all private types from the top-level package export.\n\nIn order to ensure you are only ever working with the public API you should make\nsure to only import the top-level package, eg. `import httpx`, rather than\nimporting modules within the package.\n\n### Added\n\n- Added concurrency backend autodetection. (Pull #585)\n- Added `Client(backend='trio')` and `Client(backend='asyncio')` API. (Pull #585)\n- Added `response.stream_lines()` API. (Pull #575)\n- Added `response.is_error` API. (Pull #574)\n- Added support for `timeout=Timeout(5.0, connect_timeout=60.0)` styles. (Pull #593)\n\n### Fixed\n\n- Requests or Clients with `timeout=None` now correctly always disable timeouts. (Pull #592)\n- Request 'Authorization' headers now have priority over `.netrc` authentication info. (Commit 095b691)\n- Files without a filename no longer set a Content-Type in multipart data. (Commit ed94950)\n\n### Changed\n\n- Added `httpx.stream()` API. Using `stream=True` now results in a warning. (Pull #600, #610)\n- HTTP/2 support is switched to \"off by default\", but can be enabled explicitly. (Pull #584)\n- Switched to `Client(http2=True)` API from `Client(http_versions=[\"HTTP/1.1\", \"HTTP/2\"])`. (Pull #586)\n- Removed all private types from the top-level package export. (Pull #608)\n- The SSL configuration settings of `verify`, `cert`, and `trust_env` now raise warnings if used per-request when using a Client instance. They should always be set on the Client instance itself. (Pull #597)\n- Use plain strings \"TUNNEL_ONLY\" or \"FORWARD_ONLY\" on the HTTPProxy `proxy_mode` argument. The `HTTPProxyMode` enum still exists, but its usage will raise warnings. (#610)\n- Pool timeouts are now on the timeout configuration, not the pool limits configuration. (Pull #563)\n- The timeout configuration is now named `httpx.Timeout(...)`, not `httpx.TimeoutConfig(...)`. The old version currently remains as a synonym for backwards compatibility. (Pull #591)\n\n---\n\n## 0.8.0 (November 27, 2019)\n\n### Removed\n\n- The synchronous API has been removed, in order to allow us to fundamentally change how we approach supporting both sync and async variants. (See #588 for more details.)\n\n---\n\n## 0.7.8 (November 17, 2019)\n\n### Added\n\n- Add support for proxy tunnels for Python 3.6 + asyncio. (Pull #521)\n\n## 0.7.7 (November 15, 2019)\n\n### Fixed\n\n- Resolve an issue with cookies behavior on redirect requests. 
(Pull #529)\n\n### Added\n\n- Add request/response DEBUG logs. (Pull #502)\n- Use TRACE log level for low level info. (Pull #500)\n\n## 0.7.6 (November 2, 2019)\n\n### Removed\n\n- Drop `proxies` parameter from the high-level API. (Pull #485)\n\n### Fixed\n\n- Tweak multipart files: omit null filenames, add support for `str` file contents. (Pull #482)\n- Cache NETRC authentication per-client. (Pull #400)\n- Rely on `getproxies` for all proxy environment variables. (Pull #470)\n- Wait for the `asyncio` stream to close when closing a connection. (Pull #494)\n\n## 0.7.5 (October 10, 2019)\n\n### Added\n\n- Allow lists of values to be passed to `params`. (Pull #386)\n- `ASGIDispatch`, `WSGIDispatch` are now available in the `httpx.dispatch` namespace. (Pull #407)\n- `HTTPError` is now available in the `httpx` namespace. (Pull #421)\n- Add support for `start_tls()` to the Trio concurrency backend. (Pull #467)\n\n### Fixed\n\n- Username and password are no longer included in the `Host` header when basic authentication\n credentials are supplied via the URL. (Pull #417)\n\n### Removed\n\n- The `.delete()` function no longer has `json`, `data`, or `files` parameters\n to match the expected semantics of the `DELETE` method. (Pull #408)\n- Removed the `trio` extra. Trio support is detected automatically. (Pull #390)\n\n## 0.7.4 (September 25, 2019)\n\n### Added\n\n- Add Trio concurrency backend. (Pull #276)\n- Add `params` parameter to `Client` for setting default query parameters. (Pull #372)\n- Add support for `SSL_CERT_FILE` and `SSL_CERT_DIR` environment variables. (Pull #307)\n- Add debug logging to calls into ASGI apps. (Pull #371)\n- Add debug logging to SSL configuration. (Pull #378)\n\n### Fixed\n\n- Fix a bug when using `Client` without timeouts in Python 3.6. (Pull #383)\n- Propagate `Client` configuration to HTTP proxies. (Pull #377)\n\n## 0.7.3 (September 20, 2019)\n\n### Added\n\n- HTTP Proxy support. (Pulls #259, #353)\n- Add Digest authentication. (Pull #332)\n- Add `.build_request()` method to `Client` and `AsyncClient`. (Pull #319)\n- Add `.elapsed` property on responses. (Pull #351)\n- Add support for `SSLKEYLOGFILE` in Python 3.8b4+. (Pull #301)\n\n### Removed\n\n- Drop NPN support for HTTP version negotiation. (Pull #314)\n\n### Fixed\n\n- Fix distribution of type annotations for mypy (Pull #361).\n- Set `Host` header when redirecting cross-origin. (Pull #321)\n- Drop `Content-Length` headers on `GET` redirects. (Pull #310)\n- Raise `KeyError` if header isn't found in `Headers`. (Pull #324)\n- Raise `NotRedirectResponse` in `response.next()` if there is no redirection to perform. (Pull #297)\n- Fix bug in calculating the HTTP/2 maximum frame size. (Pull #153)\n\n## 0.7.2 (August 28, 2019)\n\n- Enforce using `httpx.AsyncioBackend` for the synchronous client. (Pull #232)\n- `httpx.ConnectionPool` will properly release a dropped connection. (Pull #230)\n- Remove the `raise_app_exceptions` argument from `Client`. (Pull #238)\n- `DecodeError` will no longer be raised for an empty body encoded with Brotli. (Pull #237)\n- Added `http_versions` parameter to `Client`. (Pull #250)\n- Only use HTTP/1.1 on short-lived connections like `httpx.get()`. (Pull #284)\n- Convert `Client.cookies` and `Client.headers` when set as a property. (Pull #274)\n- Setting `HTTPX_DEBUG=1` enables debug logging on all requests. (Pull #277)\n\n## 0.7.1 (August 18, 2019)\n\n- Include files with source distribution to be installable. 
(Pull #233)\n\n## 0.7.0 (August 17, 2019)\n\n- Add the `trust_env` property to `BaseClient`. (Pull #187)\n- Add the `links` property to `BaseResponse`. (Pull #211)\n- Accept `ssl.SSLContext` instances into `SSLConfig(verify=...)`. (Pull #215)\n- Add `Response.stream_text()` with incremental encoding detection. (Pull #183)\n- Properly updated the `Host` header when a redirect changes the origin. (Pull #199)\n- Ignore invalid `Content-Encoding` headers. (Pull #196)\n- Use `~/.netrc` and `~/_netrc` files by default when `trust_env=True`. (Pull #189)\n- Create exception base class `HTTPError` with `request` and `response` properties. (Pull #162)\n- Add HSTS preload list checking within `BaseClient` to upgrade HTTP URLs to HTTPS. (Pull #184)\n- Switch IDNA encoding from IDNA 2003 to IDNA 2008. (Pull #161)\n- Expose base classes for alternate concurrency backends. (Pull #178)\n- Improve Multipart parameter encoding. (Pull #167)\n- Add the `headers` property to `BaseClient`. (Pull #159)\n- Add support for Google's `brotli` library. (Pull #156)\n- Remove deprecated TLS versions (TLSv1 and TLSv1.1) from default `SSLConfig`. (Pull #155)\n- Fix `URL.join(...)` to work similarly to RFC 3986 URL joining. (Pull #144)\n\n---\n\n## 0.6.8 (July 25, 2019)\n\n- Check for disconnections when searching for an available\n connection in `ConnectionPool.keepalive_connections` (Pull #145)\n- Allow string comparison for `URL` objects (Pull #139)\n- Add HTTP status codes 418 and 451 (Pull #135)\n- Add support for client certificate passwords (Pull #118)\n- Enable post-handshake client cert authentication for TLSv1.3 (Pull #118)\n- Disable using `commonName` for hostname checking for OpenSSL 1.1.0+ (Pull #118)\n- Detect encoding for `Response.json()` (Pull #116)\n\n## 0.6.7 (July 8, 2019)\n\n- Check for connection aliveness on re-acquiry (Pull #111)\n\n## 0.6.6 (July 3, 2019)\n\n- Improve `USER_AGENT` (Pull #110)\n- Add `Connection: keep-alive` by default to HTTP/1.1 connections. (Pull #110)\n\n## 0.6.5 (June 27, 2019)\n\n- Include `Host` header by default. (Pull #109)\n- Improve HTTP protocol detection. (Pull #107)\n\n## 0.6.4 (June 25, 2019)\n\n- Implement read and write timeouts (Pull #104)\n\n## 0.6.3 (June 24, 2019)\n\n- Handle early connection closes (Pull #103)\n\n## 0.6.2 (June 23, 2019)\n\n- Use urllib3's `DEFAULT_CIPHERS` for the `SSLConfig` object. 
(Pull #100)\n\n## 0.6.1 (June 21, 2019)\n\n- Add support for setting a `base_url` on the `Client`.\n\n## 0.6.0 (June 21, 2019)\n\n- Honor `local_flow_control_window` for HTTP/2 connections (Pull #98)\n", "path": "CHANGELOG.md" }, { "content": "import datetime\nimport email.message\nimport json as jsonlib\nimport typing\nimport urllib.request\nfrom collections.abc import Mapping\nfrom http.cookiejar import Cookie, CookieJar\n\nfrom ._content import ByteStream, UnattachedStream, encode_request, encode_response\nfrom ._decoders import (\n SUPPORTED_DECODERS,\n ByteChunker,\n ContentDecoder,\n IdentityDecoder,\n LineDecoder,\n MultiDecoder,\n TextChunker,\n TextDecoder,\n)\nfrom ._exceptions import (\n CookieConflict,\n HTTPStatusError,\n RequestNotRead,\n ResponseNotRead,\n StreamClosed,\n StreamConsumed,\n request_context,\n)\nfrom ._multipart import get_multipart_boundary_from_content_type\nfrom ._status_codes import codes\nfrom ._types import (\n AsyncByteStream,\n CookieTypes,\n HeaderTypes,\n QueryParamTypes,\n RequestContent,\n RequestData,\n RequestExtensions,\n RequestFiles,\n ResponseContent,\n ResponseExtensions,\n SyncByteStream,\n)\nfrom ._urls import URL\nfrom ._utils import (\n guess_json_utf,\n is_known_encoding,\n normalize_header_key,\n normalize_header_value,\n obfuscate_sensitive_headers,\n parse_content_type_charset,\n parse_header_links,\n)\n\n\nclass Headers(typing.MutableMapping[str, str]):\n \"\"\"\n HTTP headers, as a case-insensitive multi-dict.\n \"\"\"\n\n def __init__(\n self,\n headers: typing.Optional[HeaderTypes] = None,\n encoding: typing.Optional[str] = None,\n ) -> None:\n if headers is None:\n self._list = [] # type: typing.List[typing.Tuple[bytes, bytes, bytes]]\n elif isinstance(headers, Headers):\n self._list = list(headers._list)\n elif isinstance(headers, Mapping):\n self._list = [\n (\n normalize_header_key(k, lower=False, encoding=encoding),\n normalize_header_key(k, lower=True, encoding=encoding),\n normalize_header_value(v, encoding),\n )\n for k, v in headers.items()\n ]\n else:\n self._list = [\n (\n normalize_header_key(k, lower=False, encoding=encoding),\n normalize_header_key(k, lower=True, encoding=encoding),\n normalize_header_value(v, encoding),\n )\n for k, v in headers\n ]\n\n self._encoding = encoding\n\n @property\n def encoding(self) -> str:\n \"\"\"\n Header encoding is mandated as ascii, but we allow fallbacks to utf-8\n or iso-8859-1.\n \"\"\"\n if self._encoding is None:\n for encoding in [\"ascii\", \"utf-8\"]:\n for key, value in self.raw:\n try:\n key.decode(encoding)\n value.decode(encoding)\n except UnicodeDecodeError:\n break\n else:\n # The else block runs if 'break' did not occur, meaning\n # all values fitted the encoding.\n self._encoding = encoding\n break\n else:\n # The ISO-8859-1 encoding covers all 256 code points in a byte,\n # so will never raise decode errors.\n self._encoding = \"iso-8859-1\"\n return self._encoding\n\n @encoding.setter\n def encoding(self, value: str) -> None:\n self._encoding = value\n\n @property\n def raw(self) -> typing.List[typing.Tuple[bytes, bytes]]:\n \"\"\"\n Returns a list of the raw header items, as byte pairs.\n \"\"\"\n return [(raw_key, value) for raw_key, _, value in self._list]\n\n def keys(self) -> typing.KeysView[str]:\n return {key.decode(self.encoding): None for _, key, value in self._list}.keys()\n\n def values(self) -> typing.ValuesView[str]:\n values_dict: typing.Dict[str, str] = {}\n for _, key, value in self._list:\n str_key = key.decode(self.encoding)\n str_value = 
value.decode(self.encoding)\n if str_key in values_dict:\n values_dict[str_key] += f\", {str_value}\"\n else:\n values_dict[str_key] = str_value\n return values_dict.values()\n\n def items(self) -> typing.ItemsView[str, str]:\n \"\"\"\n Return `(key, value)` items of headers. Concatenate headers\n into a single comma separated value when a key occurs multiple times.\n \"\"\"\n values_dict: typing.Dict[str, str] = {}\n for _, key, value in self._list:\n str_key = key.decode(self.encoding)\n str_value = value.decode(self.encoding)\n if str_key in values_dict:\n values_dict[str_key] += f\", {str_value}\"\n else:\n values_dict[str_key] = str_value\n return values_dict.items()\n\n def multi_items(self) -> typing.List[typing.Tuple[str, str]]:\n \"\"\"\n Return a list of `(key, value)` pairs of headers. Allow multiple\n occurrences of the same key without concatenating into a single\n comma separated value.\n \"\"\"\n return [\n (key.decode(self.encoding), value.decode(self.encoding))\n for _, key, value in self._list\n ]\n\n def get(self, key: str, default: typing.Any = None) -> typing.Any:\n \"\"\"\n Return a header value. If multiple occurrences of the header occur\n then concatenate them together with commas.\n \"\"\"\n try:\n return self[key]\n except KeyError:\n return default\n\n def get_list(self, key: str, split_commas: bool = False) -> typing.List[str]:\n \"\"\"\n Return a list of all header values for a given key.\n If `split_commas=True` is passed, then any comma separated header\n values are split into multiple return strings.\n \"\"\"\n get_header_key = key.lower().encode(self.encoding)\n\n values = [\n item_value.decode(self.encoding)\n for _, item_key, item_value in self._list\n if item_key.lower() == get_header_key\n ]\n\n if not split_commas:\n return values\n\n split_values = []\n for value in values:\n split_values.extend([item.strip() for item in value.split(\",\")])\n return split_values\n\n def update(self, headers: typing.Optional[HeaderTypes] = None) -> None: # type: ignore\n headers = Headers(headers)\n for key in headers.keys():\n if key in self:\n self.pop(key)\n self._list.extend(headers._list)\n\n def copy(self) -> \"Headers\":\n return Headers(self, encoding=self.encoding)\n\n def __getitem__(self, key: str) -> str:\n \"\"\"\n Return a single header value.\n\n If there are multiple headers with the same key, then we concatenate\n them with commas. 
See: https://tools.ietf.org/html/rfc7230#section-3.2.2\n \"\"\"\n normalized_key = key.lower().encode(self.encoding)\n\n items = [\n header_value.decode(self.encoding)\n for _, header_key, header_value in self._list\n if header_key == normalized_key\n ]\n\n if items:\n return \", \".join(items)\n\n raise KeyError(key)\n\n def __setitem__(self, key: str, value: str) -> None:\n \"\"\"\n Set the header `key` to `value`, removing any duplicate entries.\n Retains insertion order.\n \"\"\"\n set_key = key.encode(self._encoding or \"utf-8\")\n set_value = value.encode(self._encoding or \"utf-8\")\n lookup_key = set_key.lower()\n\n found_indexes = [\n idx\n for idx, (_, item_key, _) in enumerate(self._list)\n if item_key == lookup_key\n ]\n\n for idx in reversed(found_indexes[1:]):\n del self._list[idx]\n\n if found_indexes:\n idx = found_indexes[0]\n self._list[idx] = (set_key, lookup_key, set_value)\n else:\n self._list.append((set_key, lookup_key, set_value))\n\n def __delitem__(self, key: str) -> None:\n \"\"\"\n Remove the header `key`.\n \"\"\"\n del_key = key.lower().encode(self.encoding)\n\n pop_indexes = [\n idx\n for idx, (_, item_key, _) in enumerate(self._list)\n if item_key.lower() == del_key\n ]\n\n if not pop_indexes:\n raise KeyError(key)\n\n for idx in reversed(pop_indexes):\n del self._list[idx]\n\n def __contains__(self, key: typing.Any) -> bool:\n header_key = key.lower().encode(self.encoding)\n return header_key in [key for _, key, _ in self._list]\n\n def __iter__(self) -> typing.Iterator[typing.Any]:\n return iter(self.keys())\n\n def __len__(self) -> int:\n return len(self._list)\n\n def __eq__(self, other: typing.Any) -> bool:\n try:\n other_headers = Headers(other)\n except ValueError:\n return False\n\n self_list = [(key, value) for _, key, value in self._list]\n other_list = [(key, value) for _, key, value in other_headers._list]\n return sorted(self_list) == sorted(other_list)\n\n def __repr__(self) -> str:\n class_name = self.__class__.__name__\n\n encoding_str = \"\"\n if self.encoding != \"ascii\":\n encoding_str = f\", encoding={self.encoding!r}\"\n\n as_list = list(obfuscate_sensitive_headers(self.multi_items()))\n as_dict = dict(as_list)\n\n no_duplicate_keys = len(as_dict) == len(as_list)\n if no_duplicate_keys:\n return f\"{class_name}({as_dict!r}{encoding_str})\"\n return f\"{class_name}({as_list!r}{encoding_str})\"\n\n\nclass Request:\n def __init__(\n self,\n method: typing.Union[str, bytes],\n url: typing.Union[\"URL\", str],\n *,\n params: typing.Optional[QueryParamTypes] = None,\n headers: typing.Optional[HeaderTypes] = None,\n cookies: typing.Optional[CookieTypes] = None,\n content: typing.Optional[RequestContent] = None,\n data: typing.Optional[RequestData] = None,\n files: typing.Optional[RequestFiles] = None,\n json: typing.Optional[typing.Any] = None,\n stream: typing.Union[SyncByteStream, AsyncByteStream, None] = None,\n extensions: typing.Optional[RequestExtensions] = None,\n ):\n self.method = (\n method.decode(\"ascii\").upper()\n if isinstance(method, bytes)\n else method.upper()\n )\n self.url = URL(url)\n if params is not None:\n self.url = self.url.copy_merge_params(params=params)\n self.headers = Headers(headers)\n self.extensions = {} if extensions is None else extensions\n\n if cookies:\n Cookies(cookies).set_cookie_header(self)\n\n if stream is None:\n content_type: typing.Optional[str] = self.headers.get(\"content-type\")\n headers, stream = encode_request(\n content=content,\n data=data,\n files=files,\n json=json,\n 
boundary=get_multipart_boundary_from_content_type(\n content_type=content_type.encode(self.headers.encoding)\n if content_type\n else None\n ),\n )\n self._prepare(headers)\n self.stream = stream\n # Load the request body, except for streaming content.\n if isinstance(stream, ByteStream):\n self.read()\n else:\n # There's an important distinction between `Request(content=...)`,\n # and `Request(stream=...)`.\n #\n # Using `content=...` implies automatically populated `Host` and content\n # headers, of either `Content-Length: ...` or `Transfer-Encoding: chunked`.\n #\n # Using `stream=...` will not automatically include *any* auto-populated headers.\n #\n # As an end-user you don't really need `stream=...`. It's only\n # useful when:\n #\n # * Preserving the request stream when copying requests, eg for redirects.\n # * Creating request instances on the *server-side* of the transport API.\n self.stream = stream\n\n def _prepare(self, default_headers: typing.Dict[str, str]) -> None:\n for key, value in default_headers.items():\n # Ignore Transfer-Encoding if the Content-Length has been set explicitly.\n if key.lower() == \"transfer-encoding\" and \"Content-Length\" in self.headers:\n continue\n self.headers.setdefault(key, value)\n\n auto_headers: typing.List[typing.Tuple[bytes, bytes]] = []\n\n has_host = \"Host\" in self.headers\n has_content_length = (\n \"Content-Length\" in self.headers or \"Transfer-Encoding\" in self.headers\n )\n\n if not has_host and self.url.host:\n auto_headers.append((b\"Host\", self.url.netloc))\n if not has_content_length and self.method in (\"POST\", \"PUT\", \"PATCH\"):\n auto_headers.append((b\"Content-Length\", b\"0\"))\n\n self.headers = Headers(auto_headers + self.headers.raw)\n\n @property\n def content(self) -> bytes:\n if not hasattr(self, \"_content\"):\n raise RequestNotRead()\n return self._content\n\n def read(self) -> bytes:\n \"\"\"\n Read and return the request content.\n \"\"\"\n if not hasattr(self, \"_content\"):\n assert isinstance(self.stream, typing.Iterable)\n self._content = b\"\".join(self.stream)\n if not isinstance(self.stream, ByteStream):\n # If a streaming request has been read entirely into memory, then\n # we can replace the stream with a raw bytes implementation,\n # to ensure that any non-replayable streams can still be used.\n self.stream = ByteStream(self._content)\n return self._content\n\n async def aread(self) -> bytes:\n \"\"\"\n Read and return the request content.\n \"\"\"\n if not hasattr(self, \"_content\"):\n assert isinstance(self.stream, typing.AsyncIterable)\n self._content = b\"\".join([part async for part in self.stream])\n if not isinstance(self.stream, ByteStream):\n # If a streaming request has been read entirely into memory, then\n # we can replace the stream with a raw bytes implementation,\n # to ensure that any non-replayable streams can still be used.\n self.stream = ByteStream(self._content)\n return self._content\n\n def __repr__(self) -> str:\n class_name = self.__class__.__name__\n url = str(self.url)\n return f\"<{class_name}({self.method!r}, {url!r})>\"\n\n def __getstate__(self) -> typing.Dict[str, typing.Any]:\n return {\n name: value\n for name, value in self.__dict__.items()\n if name not in [\"extensions\", \"stream\"]\n }\n\n def __setstate__(self, state: typing.Dict[str, typing.Any]) -> None:\n for name, value in state.items():\n setattr(self, name, value)\n self.extensions = {}\n self.stream = UnattachedStream()\n\n\nclass Response:\n def __init__(\n self,\n status_code: int,\n *,\n headers: 
[Dataset viewer preview omitted: each row was a complete task instance (for example, id 0_4 targets https://github.com/teamqurrent/httpx at base commit e4241c6), including modified file contents, a solution patch, a unittest-based test script run under python3.9, and the edit instruction.]

Dataset Summary

RES-Q is a codebase editing benchmark based on compact, natural-language instructions. Given an edit instruction and a codebase, the task is to produce a patch file that makes the correct edit to the codebase.

The dataset was released as part of the paper "RES-Q: Evaluating Code-Editing Large Language Model Systems at the Repository Scale".
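
For orientation, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and inspecting one task instance. The repo id used below is a placeholder, since the dataset's exact path on the Hub is not shown on this page, and the split name is not documented here either.

```python
from datasets import load_dataset

# "qurrent/RES-Q" is a placeholder repo id -- substitute the dataset's
# actual path on the Hugging Face Hub.
ds = load_dataset("qurrent/RES-Q")

# Split names are not documented on this page, so take the first available split.
split = next(iter(ds.values()))
task = split[0]

print(task["id"])           # unique task identifier, e.g. "0_4"
print(task["repo_url"])     # repository the edit applies to
print(task["base_commit"])  # commit to check out before editing
print(task["instruction"])  # the natural-language edit instruction
```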

Dataset Structure

Each RES-Q task instance contains the following fields; a sketch of how they combine into an evaluation loop follows the list.

id (str) - A unique identifier for the task instance.
repo_url (str) - The URL of the repository involved in the task.
instruction (str) - The repository edit instruction.
base_commit (str) - The commit hash representing the HEAD of the repository immediately before the instruction was carried out.
test_script (str) - The task's test suite, as a Python script to be run from the root of the repository.
testbed_environment (str) - The Python version used to run the test suite.
requirements_txt (str) - The pip package dependencies required to run the test suite.
solution_commit (str) - The commit hash representing the HEAD of the repository immediately after the instruction was carried out.
solution_patch (str) - The patch, in unified diff format, representing the difference between the base and solution commits.
modified_files (list) - A list of dictionaries containing the relative paths and content of the files modified by the solution commit.
language (str) - The primary programming language of the repository.
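
Taken together, these fields describe a self-contained evaluation loop: check out the repository at base_commit, apply a candidate patch, and run the test script from the repository root. Below is a minimal sketch of that loop, not the benchmark's official harness: it assumes git is on PATH, runs the tests with the current interpreter rather than the version named in testbed_environment, and expects the requirements_txt packages to already be installed.

```python
import os
import subprocess
import tempfile

def evaluate(task: dict, candidate_patch: str) -> bool:
    """Check out a task's repository, apply a candidate patch, and run the tests.

    A sketch, not the official harness: assumes `git` is on PATH, that the
    packages in task["requirements_txt"] are installed in the current
    environment, and that the current interpreter stands in for the version
    named in task["testbed_environment"].
    """
    workdir = tempfile.mkdtemp()

    # Clone the repository and pin it to the pre-edit state.
    subprocess.run(["git", "clone", task["repo_url"], workdir], check=True)
    subprocess.run(["git", "checkout", task["base_commit"]], cwd=workdir, check=True)

    # Write and apply the unified diff. Passing task["solution_patch"] as the
    # candidate should make the tests pass, which is a useful sanity check.
    patch_path = os.path.join(workdir, "candidate.patch")
    with open(patch_path, "w") as f:
        f.write(candidate_patch)
    subprocess.run(["git", "apply", "candidate.patch"], cwd=workdir, check=True)

    # Run the test suite from the repository root; the test scripts in this
    # dataset exit with status 0 on success and 1 on failure.
    test_path = os.path.join(workdir, "run_tests.py")
    with open(test_path, "w") as f:
        f.write(task["test_script"])
    result = subprocess.run(["python", "run_tests.py"], cwd=workdir)
    return result.returncode == 0
```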