I kicked off Friday with decentralized AI using Gemma-2, and it all works without a blockchain. Here's what I did:
1. Pinned Gemma-2 9B, along with its LoRA fine-tuning adapters, to the InterPlanetary File System (IPFS).
2. Set up a llama-ipfs server to fetch and cache the model and adapters on the fly and run inference locally.
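The fetch-and-cache step boils down to: resolve a content ID (CID) to bytes once, keep them on disk, and reuse the local copy on every later run. Here's a minimal sketch of that idea in Python, assuming a public IPFS HTTP gateway; the CID, cache path, and `fetch` hook are placeholders, not the actual llama-ipfs internals.

```python
# Sketch: resolve an IPFS CID to a local file, downloading once and
# caching so subsequent runs skip the network entirely.
# GATEWAY, the cache location, and the CID are illustrative placeholders.
import urllib.request
from pathlib import Path

GATEWAY = "https://ipfs.io/ipfs/"  # any public or local IPFS gateway

def fetch_cached(cid: str, cache_dir: str = "~/.cache/models",
                 fetch=None) -> Path:
    """Return a local path for `cid`, downloading it at most once."""
    cache = Path(cache_dir).expanduser()
    cache.mkdir(parents=True, exist_ok=True)
    target = cache / cid
    if target.exists():
        return target  # cache hit: no network traffic at all
    # `fetch` is injectable so the logic is testable offline.
    fetch = fetch or (lambda c: urllib.request.urlopen(GATEWAY + c).read())
    target.write_bytes(fetch(cid))
    return target
```

With a file cached like this, a local inference server can simply be pointed at the returned path instead of a remote URL.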
Now I can use my on-device AI platform across:
• All my macOS automation workflows
• All my browsers
• My Copilot++ in VSCode
• My Open Apple Intelligence (OAI, not to be confused with the other, closed OAI owned by a nonprofit foundation and Big Tech)
The llama-ipfs server’s RPC support lets me distribute inference across all my devices, making better use of their combined compute and improving energy efficiency.
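One simple way to picture spreading inference over several devices: each device runs its own local inference server, and a thin client sends each request to the next device in a ring. This is a hypothetical client-side round-robin sketch, not the llama-ipfs RPC mechanism itself (llama.cpp's RPC backend can split a single model across hosts, which this does not do); the endpoint hostnames are made up.

```python
# Sketch: round-robin chat requests across per-device inference servers.
# ENDPOINTS are hypothetical hosts on my LAN, each running a local server.
import itertools
import json
import urllib.request

ENDPOINTS = [
    "http://mac-studio.local:8080",
    "http://macbook.local:8080",
]

_ring = itertools.cycle(ENDPOINTS)

def complete(prompt: str, post=None) -> str:
    """Send one completion request to the next device in the ring."""
    base = next(_ring)
    body = json.dumps({"prompt": prompt, "n_predict": 128}).encode()
    if post is not None:
        return post(base, body)  # injectable transport, for testing
    req = urllib.request.Request(
        base + "/completion", data=body,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req).read().decode()
```

Even this naive dispatch keeps every request on hardware I own; smarter schemes could weight devices by load or battery state.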
Make sure you own your AI. AI in the cloud is not aligned with you; it's aligned with the company that owns it.