{
  "node": {
    "id": "urn:cid:bafkr4ien22v3j5s6h22rffqenihyzenyh23bln4ir46mbrk3lqjusgbevi",
    "properties": {
      "registeredBy": "did:key:z6MkhQD1A9eMQ8bZNGmBiCVz7kG4mfnApD7WjHKNhkZp7HEY",
      "timestamp": "2024-01-29T15:59:03Z",
      "dataRegistrationJcs": "urn:cid:baga6yaq6edtsf5pjack4uz3mwmkr3spgljq2chisbqkf7libfwulrfohxqgby",
      "nodeType": "data"
    }
  },
  "enrichments": {
    "asset_hub": {
      "asset_id": 107,
      "asset_name": "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
      "owning_project": "LAION-2B",
      "asset_description": "A CLIP ViT-H/14 model trained using the LAION-2B English subset of LAION-5B, utilizing OpenCLIP. The model, developed by Romain Beaumont on the stability.ai cluster, is designed for zero-shot, arbitrary image classification and aims to aid research in understanding the potential impact of such models.",
      "asset_format": "OpenCLIP",
      "asset_type": "Model",
      "asset_blob_type": "iroh-collection",
      "source_location_url": "",
      "contact_info": "Refer to Hugging Face's official channels for contact information.",
      "license": "MIT",
      "license_link": "https://doi.org/10.5281/zenodo.5143773",
      "registered_date": "2024-01-29T16:00:32.444151Z",
      "last_modified_date": "2024-01-29T16:00:32.444151Z"
    }
  }
}