Utilities
Configure logging
The `huggingface_hub` package exposes a `logging` utility to control the logging level of the package itself. You can import it as such:
```python
from huggingface_hub import logging
```
Then, you may define the verbosity in order to update the amount of logs you’ll see:
```python
from huggingface_hub import logging

logging.set_verbosity_error()
logging.set_verbosity_warning()
logging.set_verbosity_info()
logging.set_verbosity_debug()

logging.set_verbosity(...)
```
The levels should be understood as follows:
- `error`: only show critical logs about usage which may result in an error or unexpected behavior.
- `warning`: show logs that aren't critical but usage may result in unintended behavior. Additionally, important informative logs may be shown.
- `info`: show most logs, including some verbose logging regarding what is happening under the hood. If something is behaving in an unexpected manner, we recommend switching the verbosity level to this in order to get more information.
- `debug`: show all logs, including some internal logs which may be used to track exactly what's happening under the hood.
huggingface_hub.utils.logging.get_verbosity
Return the current level for the HuggingFace Hub's root logger.
HuggingFace Hub has the following logging levels:
- `huggingface_hub.logging.CRITICAL`, `huggingface_hub.logging.FATAL`
- `huggingface_hub.logging.ERROR`
- `huggingface_hub.logging.WARNING`, `huggingface_hub.logging.WARN`
- `huggingface_hub.logging.INFO`
- `huggingface_hub.logging.DEBUG`
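For instance, you can read the current level back and compare it against these constants. A small sketch:

```python
from huggingface_hub import logging

# Set the level, then verify it using get_verbosity() and the exposed constants.
logging.set_verbosity_warning()
assert logging.get_verbosity() == logging.WARNING
```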
huggingface_hub.utils.logging.set_verbosity
< source >( verbosity: int )
Sets the level for the HuggingFace Hub’s root logger.
huggingface_hub.utils.logging.set_verbosity_info
Sets the verbosity to `logging.INFO`.

huggingface_hub.utils.logging.set_verbosity_debug
Sets the verbosity to `logging.DEBUG`.

huggingface_hub.utils.logging.set_verbosity_warning
Sets the verbosity to `logging.WARNING`.

huggingface_hub.utils.logging.set_verbosity_error
Sets the verbosity to `logging.ERROR`.
huggingface_hub.utils.logging.disable_propagation
Disable propagation of the library log outputs. Note that log propagation is disabled by default.

huggingface_hub.utils.logging.enable_propagation
Enable propagation of the library log outputs. Please disable the HuggingFace Hub's default handler to prevent double logging if the root logger has been configured.
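For instance, an application that configures the root logger itself may want `huggingface_hub` records to flow through its own handlers. A minimal sketch (the handler configuration is illustrative; keep the double-logging caveat above in mind):

```python
import logging as py_logging

from huggingface_hub import logging as hf_logging

# Host application configures the root logger with its own handler and format.
py_logging.basicConfig(format="%(name)s - %(levelname)s - %(message)s", level=py_logging.INFO)

# Let huggingface_hub records propagate up to that root logger.
hf_logging.enable_propagation()
hf_logging.set_verbosity_info()
```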
Repo-specific helper methods
The methods exposed below are relevant when modifying modules from the `huggingface_hub` library itself. Using them shouldn't be necessary if you use `huggingface_hub` without modifying its modules.
huggingface_hub.utils.logging.get_logger
< source >( name: typing.Optional[str] = None )
Returns a logger with the specified name. This function is not supposed to be directly accessed by library users.
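For illustration, a module inside the library itself would typically create its logger like this (a minimal sketch):

```python
from huggingface_hub import logging

# Module-level logger named after the current module; it follows the
# verbosity configured for huggingface_hub.
logger = logging.get_logger(__name__)
logger.info("Doing something under the hood...")
```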
Configure progress bars
Progress bars are a useful tool to display information to the user while a long-running task is being executed (e.g. when downloading or uploading files). `huggingface_hub` exposes a `tqdm` wrapper to display progress bars in a consistent way across the library.

By default, progress bars are enabled. You can disable them globally by setting the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable. You can also enable/disable them using enable_progress_bars() and disable_progress_bars(). If set, the environment variable takes priority over the helpers.
```python
>>> from huggingface_hub import snapshot_download
>>> from huggingface_hub.utils import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars

>>> # Disable progress bars globally
>>> disable_progress_bars()

>>> # Progress bar will not be shown!
>>> snapshot_download("gpt2")

>>> are_progress_bars_disabled()
True

>>> # Re-enable progress bars globally
>>> enable_progress_bars()
```
Group-specific control of progress bars
You can also enable or disable progress bars for specific groups. This allows you to manage progress bar visibility more granularly within different parts of your application or library. When a progress bar is disabled for a group, all subgroups under it are also affected unless explicitly overridden.
```python
# The `tqdm` wrapper and the progress-bar helpers are exposed in `huggingface_hub.utils`
>>> from huggingface_hub.utils import are_progress_bars_disabled, disable_progress_bars, enable_progress_bars, tqdm

# Disable progress bars for a specific group
>>> disable_progress_bars("peft.foo")

>>> assert not are_progress_bars_disabled("peft")
>>> assert not are_progress_bars_disabled("peft.something")
>>> assert are_progress_bars_disabled("peft.foo")
>>> assert are_progress_bars_disabled("peft.foo.bar")

# Re-enable progress bars for a subgroup
>>> enable_progress_bars("peft.foo.bar")

>>> assert are_progress_bars_disabled("peft.foo")
>>> assert not are_progress_bars_disabled("peft.foo.bar")

# Use groups with tqdm
# No progress bar for `name="peft.foo"`
>>> for _ in tqdm(range(5), name="peft.foo"):
...     pass

# Progress bar will be shown for `name="peft.foo.bar"`
>>> for _ in tqdm(range(5), name="peft.foo.bar"):
...     pass
100%|███████████████████████████████████████| 5/5 [00:00<00:00, 117817.53it/s]
```
are_progress_bars_disabled
huggingface_hub.utils.are_progress_bars_disabled
< source >( name: typing.Optional[str] = None ) → bool
Check if progress bars are disabled globally or for a specific group.

This function returns whether progress bars are disabled for a given group or globally. It checks the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable first, then the programmatic settings.
disable_progress_bars
huggingface_hub.utils.disable_progress_bars
< source >( name: typing.Optional[str] = None )
Disable progress bars either globally or for a specified group.

This function updates the state of progress bars based on a group name. If no group name is provided, all progress bars are disabled. The operation respects the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable's setting.
enable_progress_bars
huggingface_hub.utils.enable_progress_bars
< source >( name: typing.Optional[str] = None )
Enable progress bars either globally or for a specified group.

This function sets the progress bars to enabled for the specified group, or globally if no group is specified. The operation is subject to the `HF_HUB_DISABLE_PROGRESS_BARS` environment variable.
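To illustrate the precedence rule, the following sketch assumes the process was started with `HF_HUB_DISABLE_PROGRESS_BARS=1` already exported in the environment:

```python
# Assumes HF_HUB_DISABLE_PROGRESS_BARS=1 was set before starting Python.
from huggingface_hub.utils import are_progress_bars_disabled, enable_progress_bars

enable_progress_bars()               # the environment variable wins, so this call has no effect (a warning may be logged)
print(are_progress_bars_disabled())  # True
```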
Configure HTTP backend
In some environments, you might want to configure how HTTP calls are made, for example if you are using a proxy. `huggingface_hub` lets you configure this globally using configure_http_backend(). All requests made to the Hub will then use your settings. Under the hood, `huggingface_hub` uses `requests.Session`, so you might want to refer to the `requests` documentation to learn more about the available parameters.

Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one session instance per thread. Using sessions allows us to keep the connection open between HTTP calls and ultimately save time. If you are integrating `huggingface_hub` in a third-party library and want to make a custom call to the Hub, use get_session() to get a Session configured by your users (i.e. replace any `requests.get(...)` call by `get_session().get(...)`).
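For example, a third-party helper that queries the Hub could reuse the user-configured session like this (a minimal sketch; `fetch_model_info` is a hypothetical helper, not part of `huggingface_hub`):

```python
from huggingface_hub import get_session
from huggingface_hub.utils import hf_raise_for_status


def fetch_model_info(repo_id: str) -> dict:
    # Reuse the Session configured by the user instead of calling requests.get(...)
    response = get_session().get(f"https://huggingface.co/api/models/{repo_id}")
    hf_raise_for_status(response)
    return response.json()
```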
huggingface_hub.configure_http_backend
< source >( backend_factory: typing.Callable[[], requests.sessions.Session] = <function _default_backend_factory> )
Configure the HTTP backend by providing a `backend_factory`. Any HTTP calls made by `huggingface_hub` will use a Session object instantiated by this factory. This can be useful if you are running your scripts in a specific environment requiring custom configuration (e.g. custom proxy or certificates).

Use get_session() to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one Session instance per thread. They are all instantiated using the same `backend_factory` set in configure_http_backend(). An LRU cache is used to cache the created sessions (and connections) between calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned.

See this issue to know more about thread-safety in `requests`.
Example:
```python
import requests
from huggingface_hub import configure_http_backend, get_session

# Create a factory function that returns a Session with configured proxies
def backend_factory() -> requests.Session:
    session = requests.Session()
    session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"}
    return session

# Set it as the default session factory
configure_http_backend(backend_factory=backend_factory)

# In practice, this is mostly done internally in `huggingface_hub`
session = get_session()
```
huggingface_hub.get_session
Get a `requests.Session` object, using the session factory from the user.

Use get_session() to get a configured Session. Since `requests.Session` is not guaranteed to be thread-safe, `huggingface_hub` creates one Session instance per thread. They are all instantiated using the same `backend_factory` set in configure_http_backend(). An LRU cache is used to cache the created sessions (and connections) between calls. Max size is 128 to avoid memory leaks if thousands of threads are spawned.

See this issue to know more about thread-safety in `requests`.
Example:
```python
import requests
from huggingface_hub import configure_http_backend, get_session

# Create a factory function that returns a Session with configured proxies
def backend_factory() -> requests.Session:
    session = requests.Session()
    session.proxies = {"http": "http://10.10.1.10:3128", "https": "https://10.10.1.11:1080"}
    return session

# Set it as the default session factory
configure_http_backend(backend_factory=backend_factory)

# In practice, this is mostly done internally in `huggingface_hub`
session = get_session()
```
Handle HTTP errors
`huggingface_hub` defines its own HTTP errors to refine the `HTTPError` raised by `requests` with additional information sent back by the server.
Raise for status
hf_raise_for_status() is meant to be the central method to "raise for status" from any request made to the Hub. It wraps the base `requests.raise_for_status()` to provide additional information. Any `HTTPError` thrown is converted into a `HfHubHTTPError`.
```python
import requests
from huggingface_hub.utils import hf_raise_for_status, HfHubHTTPError

response = requests.post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server

    # Complete the error message with additional information once it's raised
    e.append_to_message("\n`create_commit` expects the repository to exist.")
    raise
```
huggingface_hub.utils.hf_raise_for_status
< source >( response: Response endpoint_name: typing.Optional[str] = None )
Internal version of `response.raise_for_status()` that will refine a potential HTTPError. The raised exception will be an instance of `HfHubHTTPError`.

This helper is meant to be the unique method to raise_for_status when making a call to the Hugging Face Hub.
Example:
```python
import requests
from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

response = get_session().post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server

    # Complete the error message with additional information once it's raised
    e.append_to_message("\n`create_commit` expects the repository to exist.")
    raise
```
Raises the following errors when the request has failed:
- RepositoryNotFoundError: if the repository to download from cannot be found. This may be because it doesn't exist, because `repo_type` is not set correctly, or because the repo is `private` and you do not have access.
- GatedRepoError: if the repository exists but is gated and the user is not on the authorized list.
- RevisionNotFoundError: if the repository exists but the revision couldn't be found.
- EntryNotFoundError: if the repository exists but the entry (e.g. the requested file) couldn't be found.
- BadRequestError: if the request failed with an HTTP 400 BadRequest error.
- HfHubHTTPError: if the request failed for a reason not listed above.
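Because all of these errors derive from HfHubHTTPError, callers can handle the specific cases they care about and fall back to the generic one. A minimal sketch (the repo id is purely illustrative):

```python
from huggingface_hub import hf_hub_download
from huggingface_hub.errors import (
    EntryNotFoundError,
    GatedRepoError,
    HfHubHTTPError,
    RepositoryNotFoundError,
    RevisionNotFoundError,
)

try:
    path = hf_hub_download("some-namespace/some-model", "config.json")  # illustrative repo_id
except GatedRepoError:
    # Must be caught before RepositoryNotFoundError, which it derives from.
    print("Repo is gated: request access on the Hub first.")
except RepositoryNotFoundError:
    print("Repo does not exist, is private, or `repo_type` is wrong.")
except (RevisionNotFoundError, EntryNotFoundError):
    print("Repo exists but the requested revision or file does not.")
except HfHubHTTPError as e:
    print(f"Hub request failed: {e}")
```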
HTTP errors
Here is a list of HTTP errors thrown in `huggingface_hub`.
HfHubHTTPError
HfHubHTTPError is the parent class for any HF Hub HTTP error. It takes care of parsing the server response and formatting the error message to provide as much information to the user as possible.
class huggingface_hub.errors.HfHubHTTPError
< source >( message: str response: typing.Optional[requests.models.Response] = None server_message: typing.Optional[str] = None )
HTTPError to inherit from for any custom HTTP Error raised in HF Hub.
Any HTTPError is converted at least into a `HfHubHTTPError`. If some information is sent back by the server, it will be added to the error message.

Added details:
- Request id from the "X-Request-Id" header if it exists. If not, fall back to the "X-Amzn-Trace-Id" header if it exists.
- Server error message from the "X-Error-Message" header.
- Server error message if we can find one in the response body.
Example:
```python
import requests
from huggingface_hub.utils import get_session, hf_raise_for_status, HfHubHTTPError

response = get_session().post(...)
try:
    hf_raise_for_status(response)
except HfHubHTTPError as e:
    print(str(e))  # formatted message
    e.request_id, e.server_message  # details returned by server

    # Complete the error message with additional information once it's raised
    e.append_to_message("\n`create_commit` expects the repository to exist.")
    raise
```
append_to_message
Append additional information to the `HfHubHTTPError` initial message.
RepositoryNotFoundError
class huggingface_hub.errors.RepositoryNotFoundError
< source >( message: str response: typing.Optional[requests.models.Response] = None server_message: typing.Optional[str] = None )
Raised when trying to access a hf.co URL with an invalid repository name, or with a private repo name the user does not have access to.
Example:
```python
>>> from huggingface_hub import model_info
>>> model_info("<non_existent_repository>")
(...)
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: PvMw_VjBMjVdMz53WKIzP)
Repository Not Found for url: https://huggingface.co/api/models/%3Cnon_existent_repository%3E.
Please make sure you specified the correct `repo_id` and `repo_type`.
If the repo is private, make sure you are authenticated.
Invalid username or password.
```
GatedRepoError
class huggingface_hub.errors.GatedRepoError
< source >( message: str response: typing.Optional[requests.models.Response] = None server_message: typing.Optional[str] = None )
Raised when trying to access a gated repository for which the user is not on the authorized list.
Note: derives from `RepositoryNotFoundError` to ensure backward compatibility.
Example:
```python
>>> from huggingface_hub import model_info
>>> model_info("<gated_repository>")
(...)
huggingface_hub.utils._errors.GatedRepoError: 403 Client Error. (Request ID: ViT1Bf7O_026LGSQuVqfa)
Cannot access gated repo for url https://huggingface.co/api/models/ardent-figment/gated-model.
Access to model ardent-figment/gated-model is restricted and you are not in the authorized list.
Visit https://huggingface.co/ardent-figment/gated-model to ask for access.
```
RevisionNotFoundError
class huggingface_hub.errors.RevisionNotFoundError
< source >( message: str response: typing.Optional[requests.models.Response] = None server_message: typing.Optional[str] = None )
Raised when trying to access a hf.co URL with a valid repository but an invalid revision.
Example:
```python
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', 'config.json', revision='<non-existent-revision>')
(...)
huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Mwhe_c3Kt650GcdKEFomX)
Revision Not Found for url: https://huggingface.co/bert-base-cased/resolve/%3Cnon-existent-revision%3E/config.json.
```
EntryNotFoundError
class huggingface_hub.errors.EntryNotFoundError
< source >( message: str response: typing.Optional[requests.models.Response] = None server_message: typing.Optional[str] = None )
Raised when trying to access a hf.co URL with a valid repository and revision but an invalid filename.
Example:
```python
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-existent-file>')
(...)
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: 53pNl6M0MxsnG5Sw8JA6x)
Entry Not Found for url: https://huggingface.co/bert-base-cased/resolve/main/%3Cnon-existent-file%3E.
```
BadRequestError
class huggingface_hub.errors.BadRequestError
< source >( message: str response: typing.Optional[requests.models.Response] = None server_message: typing.Optional[str] = None )
Raised by hf_raise_for_status() when the server returns an HTTP 400 error.
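For illustration, a request whose payload the server rejects could be handled as follows (a sketch; the endpoint and payload are hypothetical):

```python
from huggingface_hub.errors import BadRequestError
from huggingface_hub.utils import get_session, hf_raise_for_status

# Hypothetical endpoint and payload, used purely for illustration.
response = get_session().post("https://huggingface.co/api/some-endpoint", json={"bad": "payload"})
try:
    hf_raise_for_status(response, endpoint_name="some-endpoint")
except BadRequestError as e:
    # `endpoint_name` helps identify the failing call in the formatted message.
    print(str(e))
```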
LocalEntryNotFoundError
class huggingface_hub.errors.LocalEntryNotFoundError
Raised when trying to access a file or snapshot that is not on the disk when network is disabled or unavailable (connection issue). The entry may exist on the Hub.

Note: `ValueError` type is to ensure backward compatibility.
Note: `LocalEntryNotFoundError` derives from `HTTPError` because of `EntryNotFoundError`, even when it is not a network issue.
Example:
```python
>>> from huggingface_hub import hf_hub_download
>>> hf_hub_download('bert-base-cased', '<non-cached-file>', local_files_only=True)
(...)
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find the requested files in the disk cache and outgoing traffic has been disabled. To enable hf.co look-ups and downloads online, set 'local_files_only' to False.
```
OfflineModeIsEnabled
class huggingface_hub.errors.OfflineModeIsEnabled
Raised when a request is made but `HF_HUB_OFFLINE=1` is set as an environment variable.
Telemetry
`huggingface_hub` includes a helper to send telemetry data. This information helps us debug issues and prioritize new features. Users can disable telemetry collection at any time by setting the `HF_HUB_DISABLE_TELEMETRY=1` environment variable. Telemetry is also disabled in offline mode (i.e. when setting `HF_HUB_OFFLINE=1`).

If you are a maintainer of a third-party library, sending telemetry data is as simple as making a call to `send_telemetry`. Data is sent in a separate thread to reduce as much as possible the impact for users.
huggingface_hub.utils.send_telemetry
< source >( topic: str library_name: typing.Optional[str] = None library_version: typing.Optional[str] = None user_agent: typing.Union[typing.Dict, str, NoneType] = None )
Parameters
- topic (`str`): Name of the topic that is monitored. The topic is directly used to build the URL. If you want to monitor subtopics, just use "/" separation. Examples: "gradio", "transformers/examples", ...
- library_name (`str`, optional): The name of the library that is making the HTTP request. Will be added to the user-agent header.
- library_version (`str`, optional): The version of the library that is making the HTTP request. Will be added to the user-agent header.
- user_agent (`str`, `dict`, optional): The user agent info in the form of a dictionary or a single string. It will be completed with information about the installed packages.
Sends telemetry that helps track the usage of different HF libraries.

This usage data helps us debug issues and prioritize new features. However, we understand that not everyone wants to share additional information, and we respect your privacy. You can disable telemetry collection by setting the `HF_HUB_DISABLE_TELEMETRY=1` environment variable. Telemetry is also disabled in offline mode (i.e. when setting `HF_HUB_OFFLINE=1`).

Telemetry collection is run in a separate thread to minimize impact for the user.
Example:
```python
>>> from huggingface_hub.utils import send_telemetry

# Send telemetry without library information
>>> send_telemetry("ping")

# Send telemetry to subtopic with library information
>>> send_telemetry("gradio/local_link", library_name="gradio", library_version="3.22.1")

# Send telemetry with additional data
>>> send_telemetry(
...     topic="examples",
...     library_name="transformers",
...     library_version="4.26.0",
...     user_agent={"pipeline": "text_classification", "framework": "flax"},
... )
```
Validators
`huggingface_hub` includes custom validators to validate method arguments automatically. Validation is inspired by the work done in Pydantic to validate type hints, but with more limited features.
Generic decorator
validate_hf_hub_args() is a generic decorator to encapsulate methods that have arguments following `huggingface_hub`'s naming. By default, all arguments that have a validator implemented will be validated.

If an input is not valid, an HFValidationError is thrown. Only the first non-valid value throws an error and stops the validation process.
Usage:
```python
>>> from huggingface_hub.utils import validate_hf_hub_args

>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
...     print(repo_id)

>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id

>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> @validate_hf_hub_args
... def my_cool_auth_method(token: str):
...     print(token)

>>> my_cool_auth_method(token="a token")
"a token"

>>> my_cool_auth_method(use_auth_token="a use_auth_token")
"a use_auth_token"

>>> my_cool_auth_method(token="a token", use_auth_token="a use_auth_token")
UserWarning: Both `token` and `use_auth_token` are passed (...). `use_auth_token` value will be ignored.
"a token"
```
validate_hf_hub_args
huggingface_hub.utils.validate_hf_hub_args
< source >( fn: ~CallableT )
Validate values received as arguments for any public method of `huggingface_hub`.

The goal of this decorator is to harmonize validation of arguments reused everywhere. By default, all defined validators are tested.
Validators:
- validate_repo_id(): `repo_id` must be `"repo_name"` or `"namespace/repo_name"`. Namespace is a username or an organization.
- smoothly_deprecate_use_auth_token(): use `token` instead of `use_auth_token` (only if `use_auth_token` is not expected by the decorated function - in practice, always the case in `huggingface_hub`).
Example:
```python
>>> from huggingface_hub.utils import validate_hf_hub_args

>>> @validate_hf_hub_args
... def my_cool_method(repo_id: str):
...     print(repo_id)

>>> my_cool_method(repo_id="valid_repo_id")
valid_repo_id

>>> my_cool_method("other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> my_cool_method(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.

>>> @validate_hf_hub_args
... def my_cool_auth_method(token: str):
...     print(token)

>>> my_cool_auth_method(token="a token")
"a token"

>>> my_cool_auth_method(use_auth_token="a use_auth_token")
"a use_auth_token"

>>> my_cool_auth_method(token="a token", use_auth_token="a use_auth_token")
UserWarning: Both `token` and `use_auth_token` are passed (...)
"a token"
```
HFValidationError
class huggingface_hub.errors.HFValidationError
Generic exception thrown by `huggingface_hub` validators. Inherits from `ValueError`.
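You can also trigger it directly by calling a validator yourself. A minimal sketch (assuming `HFValidationError` is exposed under `huggingface_hub.errors` as above):

```python
from huggingface_hub.errors import HFValidationError
from huggingface_hub.utils import validate_repo_id

try:
    validate_repo_id("datasets/foo/bar")  # repo_type must not be part of the repo_id
except HFValidationError as e:
    print(f"Invalid repo_id: {e}")
```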
Argument validators
Validators can also be used individually. Here is a list of all arguments that can be validated.
repo_id
huggingface_hub.utils.validate_repo_id
< source >( repo_id: str )
Validate that `repo_id` is valid.

This is not meant to replace the proper validation made on the Hub but rather to avoid local inconsistencies whenever possible (example: passing `repo_type` in the `repo_id` is forbidden).
Rules:
- Between 1 and 96 characters.
- Either "repo_name" or "namespace/repo_name".
- [a-zA-Z0-9] or "-", "_", ".".
- "--" and ".." are forbidden.
Valid: "foo"
, "foo/bar"
, "123"
, "Foo-BAR_foo.bar123"
Not valid: "datasets/foo/bar"
, ".repo_id"
, "foo--bar"
, "foo.git"
Example:
```python
>>> from huggingface_hub.utils import validate_repo_id
>>> validate_repo_id(repo_id="valid_repo_id")
>>> validate_repo_id(repo_id="other..repo..id")
huggingface_hub.utils._validators.HFValidationError: Cannot have -- or .. in repo_id: 'other..repo..id'.
```
Discussed in https://github.com/huggingface/huggingface_hub/issues/1008. Similar validation is implemented in moon-landing (the Hub's internal repository).
smoothly_deprecate_use_auth_token
Not exactly a validator, but run as well.
huggingface_hub.utils.smoothly_deprecate_use_auth_token
< source >( fn_name: str has_token: bool kwargs: typing.Dict[str, typing.Any] )
Smoothly deprecate `use_auth_token` in the `huggingface_hub` codebase.
The long-term goal is to remove any mention of `use_auth_token` in the codebase in favor of a unique and less verbose `token` argument. This will be done in a few steps:
- Step 0: methods that require a read-access to the Hub use the `use_auth_token` argument (`str`, `bool` or `None`). Methods requiring write-access have a `token` argument (`str`, `None`). This implicit rule exists to be able to not send the token when not necessary (`use_auth_token=False`), even if logged in.
- Step 1: we want to harmonize everything and use `token` everywhere (supporting `token=False` for read-only methods). In order not to break existing code, if `use_auth_token` is passed to a function, the `use_auth_token` value is passed as `token` instead, without any warning. Corner case: if both `use_auth_token` and `token` values are passed, a warning is thrown and the `use_auth_token` value is ignored.
- Step 2: once it is released, we should push downstream libraries to switch from `use_auth_token` to `token` as much as possible, but without throwing a warning (e.g. manually create issues on the corresponding repos).
- Step 3: after a transitional period (e.g. 6 months, until April 2023?), we update `huggingface_hub` to throw a warning on `use_auth_token`. Hopefully, very few users will be impacted as it would have already been fixed. In addition, unit tests in `huggingface_hub` must be adapted to expect warnings to be thrown (but still use `use_auth_token` as before).
- Step 4: after a normal deprecation cycle (3 releases?), remove this validator. `use_auth_token` will definitely not be supported. In addition, we update unit tests in `huggingface_hub` to use `token` everywhere.
This has been discussed in: