[FEEDBACK] API Playground
Hub API Playground
Let's enjoy exploring the Hugging Face API through this Playground.
Don't hesitate to share your feedback or any questions you might have :)
Nice playground! One suggestion would be to add a "snippet" section that generates the code I need to perform the request. Let's say, for example, I want to search for all @enzostvs Spaces and order them by likes (see screenshot). It would be nice to be able to get the cURL command equivalent to the one that has been made. Also (maybe more complex), build the corresponding `huggingface_hub` (Python) and `huggingface.js` (JS) snippets to get the same result. This way we could use the Hub API Playground not only as a test Space but also as a way to create complex queries and reuse them later. WDYT?
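For reference, a minimal sketch of the equivalent request in Python (the `author`, `sort`, and `direction` query parameters come from the documented `/api/spaces` route; a cURL snippet would simply GET the same URL):

```python
import requests

# Search all Spaces by a given author, ordered by likes (descending).
response = requests.get(
    "https://huggingface.co/api/spaces",
    params={"author": "enzostvs", "sort": "likes", "direction": -1},
)
response.raise_for_status()

for space in response.json():
    print(space["id"], space.get("likes"))
```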
UI tweak suggestion: I don't find the "Full" and "Config" buttons to be very explicit. I know what they mean because I know the API but for a new user I think they should be better explained. So either something like "Return full details"/"Return model config" or add a small "i" beside the button that can be clicked to show a tooltip.
EDIT: Maybe I have a slight preference for having a clickable tooltip (or anything you find better UX-wise). This way other fields could also be explained. Now that I think about it, "filter" and "search" are very similar semantically.
> UI tweak suggestion: I don't find the "Full" and "Config" buttons to be very explicit. I know what they mean because I know the API but for a new user I think they should be better explained. So either something like "Return full details"/"Return model config" or add a small "i" beside the button that can be clicked to show a tooltip.
I added a hover tooltip for now; if it's not practical enough, I'll add the clickable behaviour!
Hi, this might be the wrong place to post it but is there any user API to get publicly available information about a user?
> Hi, this might be the wrong place to post it but is there any user API to get publicly available information about a user?
Hi @mrfakename, all the available routes are listed here: https://huggingface.co/docs/hub/api
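For example, one of those documented routes lists a user's public repos; a minimal sketch (using the `/api/models` route with its `author` and `limit` parameters):

```python
import requests

# List a user's public models; the docs page above lists the
# matching routes for datasets, Spaces, etc.
response = requests.get(
    "https://huggingface.co/api/models",
    params={"author": "mrfakename", "limit": 5},
)
response.raise_for_status()

for model in response.json():
    print(model["id"])
```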
Can the siblings include file info with them, at least the file size and the last-modified date? For example:
"siblings": [
{
"rfilename": "mixtral-8x7b-instruct-v0.1.Q2_K.gguf",
"filesize": 1342141441,
"lastModified": "2023-12-14T14:30:43.000Z"
}]
This way, we can get info on the size of GGUF model files.
Thanks,
Ash
Hi Ash! You can retrieve file size information with the `/tree` endpoint: https://huggingface.co/api/models/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/tree/main. If you need extra information, you can pass `?expand=1`. If you need to list all files in subdirectories, you can pass `?recursive=1`. Since it's a heavy operation server-side, results are paginated. In that case, you must look at the `Link` header in the response.
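A minimal sketch of that call from Python, using the endpoint and flags described above:

```python
import requests

# One page of the /tree listing, with file sizes per entry.
response = requests.get(
    "https://huggingface.co/api/models/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/tree/main",
    params={"recursive": 1},
)
response.raise_for_status()

for entry in response.json():
    if entry["type"] == "file":
        print(entry["path"], entry["size"])
```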
In `huggingface_hub` (the Python client library), you can use `list_repo_tree`, which does the job for you. Hope this will help :)
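A minimal sketch with `list_repo_tree`, assuming a recent `huggingface_hub` version; it handles the pagination transparently, and file entries carry `path` and `size`:

```python
from huggingface_hub import list_repo_tree

# Iterate over every file in the repo; pagination is handled for you.
for entry in list_repo_tree(
    "TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF", recursive=True
):
    # Folder entries have no size attribute, so skip them.
    if hasattr(entry, "size"):
        print(entry.path, entry.size)
```

Passing `expand=True` should additionally fetch the last commit for each entry (and therefore a last-modified date), at the cost of a heavier, more paginated call.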
Thanks @Wauplin
I was using the HTTP HEAD method on the file that the user selected from the list.
This is better, but it is still an additional call. Please consider adding the file size info to the model info call as I suggested.
Thanks,
Ash
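For context, the HEAD approach described above looks roughly like this; a sketch against the public `/resolve` URL, where the Hub reports LFS file sizes in an `X-Linked-Size` header and regular files in `Content-Length`:

```python
import requests

# HEAD request on the file's resolve URL to read its size from headers.
url = (
    "https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF"
    "/resolve/main/mixtral-8x7b-instruct-v0.1.Q2_K.gguf"
)
response = requests.head(url, allow_redirects=False)

# LFS files answer with a redirect carrying X-Linked-Size;
# small files report their size directly in Content-Length.
size = response.headers.get("X-Linked-Size") or response.headers.get("Content-Length")
print(size)
```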
> This is better, but it is still an additional call. Please consider adding the file size info to the model info call as I suggested.
This is an extra call because it is also an extra operation on the server side. If we returned this information in the model info call, it would increase the response time for all users. Also, on the model info call we can't paginate the results as is currently done on the `/tree` endpoint, meaning that for models/datasets with hundreds or thousands of files, we wouldn't be able to return everything at once in a fair amount of time. It all comes down to the fact that listing files + sizes + commit data is a "costly" operation (or at least, not a "cached" one).
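To make the pagination concrete, here is a sketch of consuming the full `/tree` listing page by page by following the `Link` header, which is exactly the work `list_repo_tree` hides (`requests` exposes the parsed header as `response.links`):

```python
import requests

# Walk every page of the /tree listing by following the Link header.
url = "https://huggingface.co/api/models/TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF/tree/main"
params = {"recursive": 1}
total_size = 0

while url:
    response = requests.get(url, params=params)
    response.raise_for_status()
    for entry in response.json():
        if entry["type"] == "file":
            total_size += entry["size"]
    # No "next" link means we reached the last page.
    url = response.links.get("next", {}).get("url")
    params = None  # the next-page URL already carries its query parameters

print(f"total: {total_size / 1e9:.1f} GB")
```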