/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/serialization.md

Example:

```py
>>> from huggingface_hub import save_torch_state_dict
>>> model = ...  # A PyTorch model
>>> state_dict = model.state_dict()
>>> save_torch_state_dict(state_dict, "path/to/folder")
```

The `serialization` module also contains low-level helpers to split a state dictionary into several shards, while creating a proper index in the process. These helpers are available for `torch` and `tensorflow` tensors and are designed to be easily extended to any other ML framework.
```python
Split a model state dictionary into shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

Args:
    state_dict (`Dict[str, Tensor]`):
        The state dictionary to save.
    filename_pattern (`str`, *optional*):
        The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"tf_model{suffix}.h5"`.
    max_shard_size (`int` or `str`, *optional*):
        The maximum size of each shard, in bytes. Defaults to 5GB.

Returns:
    [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them.
```
```python
Split a model state dictionary into shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

<Tip>

To save a model state dictionary to the disk, see [`save_torch_state_dict`]. This helper uses `split_torch_state_dict_into_shards` under the hood.

</Tip>

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

Args:
    state_dict (`Dict[str, torch.Tensor]`):
        The state dictionary to save.
    filename_pattern (`str`, *optional*):
        The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"`.
    max_shard_size (`int` or `str`, *optional*):
        The maximum size of each shard, in bytes. Defaults to 5GB.

Returns:
    [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them.

Example:

```py
>>> import json
>>> import os
>>> import torch
>>> from typing import Dict
>>> from safetensors.torch import save_file as safe_save_file
>>> from huggingface_hub import split_torch_state_dict_into_shards

>>> def save_state_dict(state_dict: Dict[str, torch.Tensor], save_directory: str):
...     state_dict_split = split_torch_state_dict_into_shards(state_dict)
...     for filename, tensors in state_dict_split.filename_to_tensors.items():
...         shard = {tensor: state_dict[tensor] for tensor in tensors}
...         safe_save_file(
...             shard,
...             os.path.join(save_directory, filename),
...             metadata={"format": "pt"},
...         )
...     if state_dict_split.is_sharded:
...         index = {
...             "metadata": state_dict_split.metadata,
...             "weight_map": state_dict_split.tensor_to_filename,
...         }
...         with open(os.path.join(save_directory, "model.safetensors.index.json"), "w") as f:
...             f.write(json.dumps(index, indent=2))
```
```
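For the reverse direction, here is a minimal sketch of how such a sharded checkpoint could be read back by hand, under the same assumptions as the example above (same directory layout and index file name). `load_sharded_state_dict` is a hypothetical helper written for illustration; in practice [`load_torch_model`], documented below, handles this for you:

```py
>>> import json
>>> import os
>>> from safetensors.torch import load_file as safe_load_file

>>> def load_sharded_state_dict(save_directory: str):
...     # Read the index written by `save_state_dict` above
...     with open(os.path.join(save_directory, "model.safetensors.index.json")) as f:
...         index = json.load(f)
...     state_dict = {}
...     # Load each shard file listed in the weight map exactly once
...     for filename in set(index["weight_map"].values()):
...         state_dict.update(safe_load_file(os.path.join(save_directory, filename)))
...     return state_dict
```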
This is the underlying factory from which each framework-specific helper is derived. In practice, you are not expected to use this factory directly unless you need to adapt it to a framework that is not yet supported. If that is the case, please let us know by [opening a new issue](https://github.com/huggingface/huggingface_hub/issues/new) on the `huggingface_hub` repo.
```python
Split a model state dictionary into shards so that each shard is smaller than a given size.

The shards are determined by iterating through the `state_dict` in the order of its keys. There is no optimization made to make each shard as close as possible to the maximum size passed. For example, if the limit is 10GB and we have tensors of sizes [6GB, 6GB, 2GB, 6GB, 2GB, 2GB] they will get sharded as [6GB], [6+2GB], [6+2+2GB] and not [6+2+2GB], [6+2GB], [6GB].

<Tip warning={true}>

If one of the model's tensors is bigger than `max_shard_size`, it will end up in its own shard which will have a size greater than `max_shard_size`.

</Tip>

Args:
    state_dict (`Dict[str, Tensor]`):
        The state dictionary to save.
    get_storage_size (`Callable[[Tensor], int]`):
        A function that returns the size of a tensor when saved on disk in bytes.
    get_storage_id (`Callable[[Tensor], Optional[Any]]`, *optional*):
        A function that returns a unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id.
    filename_pattern (`str`, *optional*):
        The pattern to generate the file names in which the model will be saved. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`.
    max_shard_size (`int` or `str`, *optional*):
        The maximum size of each shard, in bytes. Defaults to 5GB.

Returns:
    [`StateDictSplit`]: A `StateDictSplit` object containing the shards and the index to retrieve them.
```
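To make the factory's contract concrete, here is a hedged sketch of what a new framework adapter could look like, using NumPy purely as an illustration. `split_numpy_state_dict_into_shards` and the `"model{suffix}.npz"` pattern are hypothetical, and the import location of the factory is an assumption based on this reference page:

```py
>>> import numpy as np
>>> from huggingface_hub.serialization import split_state_dict_into_shards_factory

>>> def get_storage_size(arr: np.ndarray) -> int:
...     # Size of the array when saved on disk, in bytes
...     return arr.nbytes

>>> def split_numpy_state_dict_into_shards(state_dict, max_shard_size="5GB"):
...     # Hypothetical NumPy variant built on the generic factory
...     return split_state_dict_into_shards_factory(
...         state_dict,
...         get_storage_size=get_storage_size,
...         filename_pattern="model{suffix}.npz",
...         max_shard_size=max_shard_size,
...     )
```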
The loading helpers support both single-file and sharded checkpoints in either safetensors or pickle format. [`load_torch_model`] takes a `nn.Module` and a checkpoint path (either a single file or a directory) as input and loads the weights into the model.
```python
Load a checkpoint into a model, handling both sharded and non-sharded checkpoints.

Args:
    model (`torch.nn.Module`):
        The model in which to load the checkpoint.
    checkpoint_path (`str` or `os.PathLike`):
        Path to either the checkpoint file or directory containing the checkpoint(s).
    strict (`bool`, *optional*, defaults to `False`):
        Whether to strictly enforce that the keys in the model state dict match the keys in the checkpoint.
    safe (`bool`, *optional*, defaults to `True`):
        If `safe` is True, the safetensors files will be loaded. If `safe` is False, the function will first attempt to load safetensors files if they are available, otherwise it will fall back to loading pickle files. The `filename_pattern` parameter takes precedence over the `safe` parameter.
    weights_only (`bool`, *optional*, defaults to `False`):
        If True, only loads the model weights without optimizer states and other metadata. Only supported in PyTorch >= 1.13.
    map_location (`str` or `torch.device`, *optional*):
        A `torch.device` object, string or a dict specifying how to remap storage locations. It indicates the location where all tensors should be loaded.
    mmap (`bool`, *optional*, defaults to `False`):
        Whether to use memory-mapped file loading. Memory mapping can improve loading performance for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints.
    filename_pattern (`str`, *optional*):
        The pattern to look for the index file. Pattern must be a string that can be formatted with `filename_pattern.format(suffix=...)` and must contain the keyword `suffix`. Defaults to `"model{suffix}.safetensors"`.

Returns:
    `NamedTuple`: A named tuple with `missing_keys` and `unexpected_keys` fields.
        - `missing_keys` is a list of str containing the missing keys, i.e. keys that are in the model but not in the checkpoint.
        - `unexpected_keys` is a list of str containing the unexpected keys, i.e. keys that are in the checkpoint but not in the model.

Raises:
    [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError)
        If the checkpoint file or directory does not exist.
    [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError)
        If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If the checkpoint path is invalid or if the checkpoint format cannot be determined.

Example:

```py
>>> from huggingface_hub import load_torch_model
>>> model = ...  # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")
```
```
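Since `strict=False` reports key mismatches instead of raising, the returned named tuple is worth inspecting. A short sketch based on the parameters documented above; the checkpoint path is a placeholder and the empty outputs are illustrative:

```py
>>> from huggingface_hub import load_torch_model
>>> model = ...  # A PyTorch model
>>> result = load_torch_model(model, "path/to/checkpoint", strict=False, safe=True)
>>> result.missing_keys     # keys in the model but not in the checkpoint
[]
>>> result.unexpected_keys  # keys in the checkpoint but not in the model
[]
```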
```python
Loads a checkpoint file, handling both safetensors and pickle checkpoint formats.

Args:
    checkpoint_file (`str` or `os.PathLike`):
        Path to the checkpoint file to load. Can be either a safetensors or pickle (`.bin`) checkpoint.
    map_location (`str` or `torch.device`, *optional*):
        A `torch.device` object, string or a dict specifying how to remap storage locations. It indicates the location where all tensors should be loaded.
    weights_only (`bool`, *optional*, defaults to `False`):
        If True, only loads the model weights without optimizer states and other metadata. Only supported for pickle (`.bin`) checkpoints with PyTorch >= 1.13. Has no effect when loading safetensors files.
    mmap (`bool`, *optional*, defaults to `False`):
        Whether to use memory-mapped file loading. Memory mapping can improve loading performance for large models in PyTorch >= 2.1.0 with zipfile-based checkpoints. Has no effect when loading safetensors files, as the `safetensors` library uses memory mapping by default.

Returns:
    `Union[Dict[str, "torch.Tensor"], Any]`: The loaded checkpoint.
        - For safetensors files: always returns a dictionary mapping parameter names to tensors.
        - For pickle files: returns any Python object that was pickled (commonly a state dict, but could be an entire model, optimizer state, or any other Python object).

Raises:
    [`FileNotFoundError`](https://docs.python.org/3/library/exceptions.html#FileNotFoundError)
        If the checkpoint file does not exist.
    [`ImportError`](https://docs.python.org/3/library/exceptions.html#ImportError)
        If safetensors or torch is not installed when trying to load a .safetensors file or a PyTorch checkpoint respectively.
    [`OSError`](https://docs.python.org/3/library/exceptions.html#OSError)
        If the checkpoint file format is invalid or if git-lfs files are not properly downloaded.
    [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
        If the checkpoint file path is empty or invalid.

Example:

```py
>>> from huggingface_hub import load_state_dict_from_file
>>> model = ...  # A PyTorch model

>>> # Load a pickle checkpoint
>>> state_dict = load_state_dict_from_file("path/to/model.bin", map_location="cpu")
>>> model.load_state_dict(state_dict)

>>> # Load a safetensors checkpoint
>>> state_dict = load_state_dict_from_file("path/to/model.safetensors")
>>> model.load_state_dict(state_dict)
```
```
```python
Return a unique identifier for a tensor storage.

Multiple different tensors can share the same underlying storage. This identifier is guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with non-overlapping lifetimes may have the same id. In the case of meta tensors, we return None since we can't tell if they share the same storage.

Taken from https://github.com/huggingface/transformers/blob/1ecf5f7c982d761b4daaa96719d162c324187c64/src/transformers/pytorch_utils.py#L278.
```
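To make the "shared storage" notion concrete, here is a small sketch (assuming `get_torch_storage_id` is importable from `huggingface_hub`, as this reference page documents it): two views of the same tensor share one storage, so they map to the same identifier, while a freshly allocated tensor does not:

```py
>>> import torch
>>> from huggingface_hub import get_torch_storage_id

>>> t = torch.arange(10)
>>> a, b = t[:5], t[5:]  # two views over the same underlying storage
>>> get_torch_storage_id(a) == get_torch_storage_id(b)
True
>>> get_torch_storage_id(a) == get_torch_storage_id(torch.arange(10))
False
```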
```python
Taken from https://github.com/huggingface/safetensors/blob/08db34094e9e59e2f9218f2df133b7b4aaff5a99/bindings/python/py_src/safetensors/torch.py#L31C1-L41C59
```
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/tensorboard.md

<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

TensorBoard is a visualization toolkit for machine learning experimentation. TensorBoard allows tracking and visualizing metrics such as loss and accuracy, visualizing the model graph, viewing histograms, displaying images and much more. TensorBoard is well integrated with the Hugging Face Hub. The Hub automatically detects TensorBoard traces (such as `tfevents` files) when they are pushed to the Hub and starts an instance to visualize them. To get more information about TensorBoard integration on the Hub, check out [this guide](https://huggingface.co/docs/hub/tensorboard).

To benefit from this integration, `huggingface_hub` provides a custom logger to push logs to the Hub. It works as a drop-in replacement for [SummaryWriter](https://tensorboardx.readthedocs.io/en/latest/tensorboard.html) with no extra code needed. Traces are still saved locally, and a background job pushes them to the Hub at regular intervals.
```python
Wrapper around the tensorboard's `SummaryWriter` to push training logs to the Hub.

Data is logged locally and then pushed to the Hub asynchronously. Pushing data to the Hub is done in a separate thread to avoid blocking the training script. In particular, if the upload fails for any reason (e.g. a connection issue), the main script will not be interrupted. Data is automatically pushed to the Hub every `commit_every` minutes (defaults to every 5 minutes).

<Tip warning={true}>

`HFSummaryWriter` is experimental. Its API is subject to change in the future without prior notice.

</Tip>

Args:
    repo_id (`str`):
        The id of the repo to which the logs will be pushed.
    logdir (`str`, *optional*):
        The directory where the logs will be written. If not specified, a local directory will be created by the underlying `SummaryWriter` object.
    commit_every (`int` or `float`, *optional*):
        The frequency (in minutes) at which the logs will be pushed to the Hub. Defaults to 5 minutes.
    squash_history (`bool`, *optional*):
        Whether to squash the history of the repo after each commit. Defaults to `False`. Squashing commits is useful to avoid degraded performance on the repo when it grows too large.
    repo_type (`str`, *optional*):
        The type of the repo to which the logs will be pushed. Defaults to "model".
    repo_revision (`str`, *optional*):
        The revision of the repo to which the logs will be pushed. Defaults to "main".
    repo_private (`bool`, *optional*):
        Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
    path_in_repo (`str`, *optional*):
        The path to the folder in the repo where the logs will be pushed. Defaults to "tensorboard/".
    repo_allow_patterns (`List[str]` or `str`, *optional*):
        A list of patterns to include in the upload. Defaults to `"*.tfevents.*"`. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
    repo_ignore_patterns (`List[str]` or `str`, *optional*):
        A list of patterns to exclude from the upload. Check out the [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details.
    token (`str`, *optional*):
        Authentication token. Will default to the stored token. See https://huggingface.co/settings/token for more details.
    kwargs:
        Additional keyword arguments passed to `SummaryWriter`.

Examples:

```diff
- from torch.utils.tensorboard import SummaryWriter
+ from huggingface_hub import HFSummaryWriter

import numpy as np

- writer = SummaryWriter()
+ writer = HFSummaryWriter(repo_id="username/my-trained-model")

for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
```

```py
>>> from huggingface_hub import HFSummaryWriter

>>> with HFSummaryWriter(repo_id="test_hf_logger", commit_every=15) as logger:
...     logger.add_scalar("a", 1)
...     logger.add_scalar("b", 2)
```
```
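The constructor arguments documented above can be combined freely. A brief illustrative sketch; the repo id and paths are placeholders:

```py
>>> from huggingface_hub import HFSummaryWriter

>>> writer = HFSummaryWriter(
...     repo_id="username/my-trained-model",  # placeholder repo id
...     logdir="runs/exp1",                   # local log directory
...     commit_every=10,                      # push every 10 minutes
...     squash_history=True,                  # keep the repo history small
...     path_in_repo="tensorboard/exp1",      # remote folder for the traces
... )
>>> writer.add_scalar("Loss/train", 0.42, 1)
```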
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/inference_client.md

<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

Inference is the process of using a trained model to make predictions on new data. Because this process can be compute-intensive, running it on a dedicated server can be an appealing option. The `huggingface_hub` library provides an easy way to call a service that runs inference for hosted models. There are several services you can connect to:

- [Inference API](https://huggingface.co/docs/api-inference/index): a service that allows you to run accelerated inference on Hugging Face's infrastructure for free. This service is a fast way to get started, test different models, and prototype AI products.
- [Inference Endpoints](https://huggingface.co/inference-endpoints): a product to easily deploy models to production. Inference is run by Hugging Face in a dedicated, fully managed infrastructure on a cloud provider of your choice.

These services can be called with the [`InferenceClient`] object. Please refer to [this guide](../guides/inference) for more information on how to use it.
```python
Initialize a new Inference Client.

[`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used seamlessly with either the (free) Inference API or self-hosted Inference Endpoints.

Args:
    model (`str`, `optional`):
        The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`, or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task.
        Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2 arguments are mutually exclusive. If using `base_url` for chat completion, the `/chat/completions` suffix path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) documentation for details). When passing a URL as `model`, the client will not append any suffix path to it.
    token (`str` or `bool`, *optional*):
        Hugging Face token. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server.
        Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2 arguments are mutually exclusive and have the exact same behavior.
    timeout (`float`, `optional`):
        The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.
    headers (`Dict[str, str]`, `optional`):
        Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.
    cookies (`Dict[str, str]`, `optional`):
        Additional cookies to send to the server.
    proxies (`Any`, `optional`):
        Proxies to use for the request.
    base_url (`str`, `optional`):
        Base URL to run inference. This is a duplicated argument from `model` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None.
    api_key (`str`, `optional`):
        Token to use for authentication. This is a duplicated argument from `token` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.
```
An async version of the client is also provided, based on `asyncio` and `aiohttp`. To use it, you can either install `aiohttp` directly or use the `[inference]` extra:

```sh
pip install --upgrade huggingface_hub[inference]
# or
# pip install aiohttp
```
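Once installed, the async client mirrors the sync API with awaitable task methods. A minimal sketch; the prompt is a placeholder:

```py
>>> import asyncio
>>> from huggingface_hub import AsyncInferenceClient

>>> async def main():
...     client = AsyncInferenceClient()
...     # Same task methods as the sync client, but awaited
...     result = await client.text_generation("The huggingface_hub library is ")
...     print(result)

>>> asyncio.run(main())
```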
```python
Initialize a new Inference Client.

[`InferenceClient`] aims to provide a unified experience to perform inference. The client can be used seamlessly with either the (free) Inference API or self-hosted Inference Endpoints.

Args:
    model (`str`, `optional`):
        The model to run inference with. Can be a model id hosted on the Hugging Face Hub, e.g. `meta-llama/Meta-Llama-3-8B-Instruct`, or a URL to a deployed Inference Endpoint. Defaults to None, in which case a recommended model is automatically selected for the task.
        Note: for better compatibility with OpenAI's client, `model` has been aliased as `base_url`. Those 2 arguments are mutually exclusive. If using `base_url` for chat completion, the `/chat/completions` suffix path will be appended to the base URL (see the [TGI Messages API](https://huggingface.co/docs/text-generation-inference/en/messages_api) documentation for details). When passing a URL as `model`, the client will not append any suffix path to it.
    token (`str` or `bool`, *optional*):
        Hugging Face token. Will default to the locally saved token if not provided. Pass `token=False` if you don't want to send your token to the server.
        Note: for better compatibility with OpenAI's client, `token` has been aliased as `api_key`. Those 2 arguments are mutually exclusive and have the exact same behavior.
    timeout (`float`, `optional`):
        The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.
    headers (`Dict[str, str]`, `optional`):
        Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.
    cookies (`Dict[str, str]`, `optional`):
        Additional cookies to send to the server.
    trust_env (`bool`, `optional`):
        Trust environment settings for proxy configuration if the parameter is `True` (`False` by default).
    proxies (`Any`, `optional`):
        Proxies to use for the request.
    base_url (`str`, `optional`):
        Base URL to run inference. This is a duplicated argument from `model` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `model` is set. Defaults to None.
    api_key (`str`, `optional`):
        Token to use for authentication. This is a duplicated argument from `token` to make [`InferenceClient`] follow the same pattern as `openai.OpenAI` client. Cannot be used if `token` is set. Defaults to None.
```
```python
Error raised when a model is unavailable or the request times out.
```
```python
This dataclass represents the model status in the Hugging Face Inference API.

Args:
    loaded (`bool`):
        If the model is currently loaded into Hugging Face's InferenceAPI. Models are loaded on-demand, leading to the user's first request taking longer. If a model is loaded, you can be assured that it is in a healthy state.
    state (`str`):
        The current state of the model. This can be 'Loaded', 'Loadable', 'TooBig'. If a model's state is 'Loadable', it's not too big and has a supported backend. Loadable models are automatically loaded when the user first requests inference on the endpoint. This means it is transparent for the user to load a model, except that the first call takes longer to complete.
    compute_type (`Dict`):
        Information about the compute resource the model is using or will use, such as 'gpu' type and number of replicas.
    framework (`str`):
        The name of the framework that the model was built with, such as 'transformers' or 'text-generation-inference'.
```
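A `ModelStatus` is typically obtained through the client's `get_model_status` method. A short sketch; the model id is a placeholder and the output values are illustrative:

```py
>>> from huggingface_hub import InferenceClient

>>> client = InferenceClient()
>>> status = client.get_model_status("meta-llama/Meta-Llama-3-8B-Instruct")
>>> status.loaded, status.state
(True, 'Loaded')
```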
[`InferenceAPI`] is the legacy way to call the Inference API. The interface is simpler and requires knowing the input parameters and output format for each task. It also lacks the ability to connect to other services like Inference Endpoints or AWS SageMaker. [`InferenceAPI`] will soon be deprecated, so we recommend using [`InferenceClient`] whenever possible. Check out [this guide](../guides/inference#legacy-inferenceapi-client) to learn how to switch from [`InferenceAPI`] to [`InferenceClient`] in your scripts.
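For instance, the mask-filling call from the legacy example below migrates roughly like this (a sketch; `fill_mask` is the [`InferenceClient`] task method corresponding to this use case):

```py
# Legacy client
from huggingface_hub.inference_api import InferenceApi

inference = InferenceApi("bert-base-uncased")
inference(inputs="The goal of life is [MASK].")

# Recommended replacement
from huggingface_hub import InferenceClient

client = InferenceClient()
client.fill_mask("The goal of life is [MASK].", model="bert-base-uncased")
```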
```python
Client to configure requests and make calls to the HuggingFace Inference API.

Example:

```python
>>> from huggingface_hub.inference_api import InferenceApi

>>> # Mask-fill example
>>> inference = InferenceApi("bert-base-uncased")
>>> inference(inputs="The goal of life is [MASK].")
[{'sequence': 'the goal of life is life.', 'score': 0.10933292657136917, 'token': 2166, 'token_str': 'life'}]

>>> # Question Answering example
>>> inference = InferenceApi("deepset/roberta-base-squad2")
>>> inputs = {
...     "question": "What's my name?",
...     "context": "My name is Clara and I live in Berkeley.",
... }
>>> inference(inputs)
{'score': 0.9326569437980652, 'start': 11, 'end': 16, 'answer': 'Clara'}

>>> # Zero-shot example
>>> inference = InferenceApi("typeform/distilbert-base-uncased-mnli")
>>> inputs = "Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!"
>>> params = {"candidate_labels": ["refund", "legal", "faq"]}
>>> inference(inputs, params)
{'sequence': 'Hi, I recently bought a device from your company but it is not working as advertised and I would like to get reimbursed!', 'labels': ['refund', 'faq', 'legal'], 'scores': [0.9378499388694763, 0.04914155602455139, 0.013008488342165947]}

>>> # Overriding configured task
>>> inference = InferenceApi("bert-base-uncased", task="feature-extraction")

>>> # Text-to-image
>>> inference = InferenceApi("stabilityai/stable-diffusion-2-1")
>>> inference("cat")
<PIL.PngImagePlugin.PngImageFile image (...)>

>>> # Return as raw response to parse the output yourself
>>> inference = InferenceApi("mio/amadeus")
>>> response = inference("hello world", raw_response=True)
>>> response.headers
{"Content-Type": "audio/flac", ...}
>>> response.content  # raw bytes from server
b'(...)'
```
```

- __init__
- __call__
- all
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/hf_file_system.md

<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

The `HfFileSystem` class provides a pythonic file interface to the Hugging Face Hub based on [`fsspec`](https://filesystem-spec.readthedocs.io/en/latest/).

`HfFileSystem` is based on [fsspec](https://filesystem-spec.readthedocs.io/en/latest/), so it is compatible with most of the APIs that it offers. For more details, check out [our guide](../guides/hf_file_system) and fsspec's [API Reference](https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem).
- __init__
- all
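A brief sketch of typical fsspec-style usage; the repo and file names are placeholders, and the listed output is illustrative:

```py
>>> from huggingface_hub import HfFileSystem

>>> fs = HfFileSystem()
>>> fs.ls("datasets/my-username/my-dataset-repo", detail=False)  # list repo files
['datasets/my-username/my-dataset-repo/.gitattributes', 'datasets/my-username/my-dataset-repo/data.csv']
>>> with fs.open("datasets/my-username/my-dataset-repo/data.csv", "r") as f:
...     data = f.read()
```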
/Users/nielsrogge/Documents/python_projecten/huggingface_hub/docs/source/en/package_reference/repository.md

<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->

The `Repository` class is a helper class that wraps `git` and `git-lfs` commands. It provides tooling adapted for managing repositories which can be very large. It is the recommended tool as soon as any `git` operation is involved, or when collaboration will be a point of focus with the repository itself.

```python
Helper class to wrap the git and git-lfs commands.

The aim is to facilitate interacting with huggingface.co hosted model or dataset repos, though not a lot here (if any) is actually specific to huggingface.co.

<Tip warning={true}>

[`Repository`] is deprecated in favor of the http-based alternatives implemented in [`HfApi`]. Given its large adoption in legacy code, the complete removal of [`Repository`] will only happen in release `v1.0`. For more details, please read https://huggingface.co/docs/huggingface_hub/concepts/git_vs_http.

</Tip>
```

- __init__
- current_branch
- all
```python
Check if the folder is the root or part of a git repository

Args:
    folder (`str`):
        The folder in which to run the command.

Returns:
    `bool`: `True` if the folder is the root or part of a git repository, `False` otherwise.
```
```python
Check if the folder is a local clone of the remote_url

Args:
    folder (`str` or `Path`):
        The folder in which to run the command.
    remote_url (`str`):
        The url of a git repository.

Returns:
    `bool`: `True` if the repository is a local clone of the remote repository specified, `False` otherwise.
```
```python
Check if the file passed is tracked with git-lfs.

Args:
    filename (`str` or `Path`):
        The filename to check.

Returns:
    `bool`: `True` if the file passed is tracked with git-lfs, `False` otherwise.
```
```python
Check if file is git-ignored. Supports nested .gitignore files.

Args:
    filename (`str` or `Path`):
        The filename to check.

Returns:
    `bool`: `True` if the file passed is ignored by `git`, `False` otherwise.
```
```python
Returns a list of filenames that are to be staged.

Args:
    pattern (`str` or `Path`):
        The pattern of filenames to check. Put `.` to get all files.
    folder (`str` or `Path`):
        The folder in which to run the command.

Returns:
    `List[str]`: List of files that are to be staged.
```
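A sketch of how these helpers fit together, assuming they are imported from the `huggingface_hub.repository` module alongside the deprecated `Repository` class (adjust the import to where your version exposes them; the folder path and remote URL are placeholders):

```py
from huggingface_hub.repository import (
    files_to_be_staged,
    is_git_ignored,
    is_git_repo,
    is_local_clone,
    is_tracked_with_lfs,
)

folder = "path/to/local/repo"  # placeholder path
if is_git_repo(folder) and is_local_clone(folder, "https://huggingface.co/username/repo"):
    print(files_to_be_staged(".", folder=folder))      # files that would be staged
    print(is_tracked_with_lfs(f"{folder}/model.bin"))  # large file tracked with git-lfs?
    print(is_git_ignored(f"{folder}/.env"))            # ignored by a .gitignore?
```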